Test Report: KVM_Linux_containerd 12230

098adff14f97e55ded5626b0a90c858c09622337:2021-08-13:19986

Failed tests (9/269)

TestAddons/parallel/CSI (364.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 11.326687ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210813200824-393438 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200824-393438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813200824-393438 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210813200824-393438 create -f testdata/csi-hostpath-driver/pv-pod.yaml
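For reference, the two manifests created above (testdata/csi-hostpath-driver/pvc.yaml and pv-pod.yaml) look roughly like the sketch below. The names hpvc, task-pv-pod, the app=task-pv-pod label, the nginx image, and the mount path are confirmed by the describe output further down; the storage class name and requested size are assumptions.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc                          # PVC name the test waits on
spec:
  storageClassName: csi-hostpath-sc   # assumed: the csi-hostpath-driver addon's storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
  labels:
    app: task-pv-pod                  # label the test selects on
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: hpvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: task-pv-storage
          mountPath: /usr/share/nginx/html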

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e] Pending
helpers_test.go:343: "task-pv-pod" [ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:544: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210813200824-393438 -n addons-20210813200824-393438
addons_test.go:544: TestAddons/parallel/CSI: showing logs for failed pods as of 2021-08-13 20:18:04.629294177 +0000 UTC m=+613.648332244
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210813200824-393438 describe po task-pv-pod -n default
addons_test.go:544: (dbg) kubectl --context addons-20210813200824-393438 describe po task-pv-pod -n default:
Name:         task-pv-pod
Namespace:    default
Priority:     0
Node:         addons-20210813200824-393438/192.168.39.71
Start Time:   Fri, 13 Aug 2021 20:12:04 +0000
Labels:       app=task-pv-pod
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  task-pv-container:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      k8s-minikube
      GCP_PROJECT:                     k8s-minikube
      GCLOUD_PROJECT:                  k8s-minikube
      GOOGLE_CLOUD_PROJECT:            k8s-minikube
      CLOUDSDK_CORE_PROJECT:           k8s-minikube
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-945zl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-945zl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                   Age                  From                                      Message
----     ------                   ----                 ----                                      -------
Normal   Scheduled                6m                   default-scheduler                         Successfully assigned default/task-pv-pod to addons-20210813200824-393438
Normal   SuccessfulAttachVolume   6m                   attachdetach-controller                   AttachVolume.Attach succeeded for volume "pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9"
Warning  VolumeConditionAbnormal  5m59s (x10 over 6m)  csi-pv-monitor-agent-hostpath.csi.k8s.io  The volume isn't mounted
Warning  FailedMount              3m57s                kubelet                                   Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-945zl gcp-creds task-pv-storage]: timed out waiting for the condition
Warning  FailedMount              110s (x10 over 6m)   kubelet                                   MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
Warning  FailedMount              100s                 kubelet                                   Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[task-pv-storage kube-api-access-945zl gcp-creds]: timed out waiting for the condition
Normal   VolumeConditionNormal    60s (x41 over 5m)    csi-pv-monitor-agent-hostpath.csi.k8s.io  The Volume returns to the healthy state
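The decisive entries here are the FailedMount warnings for gcp-creds: the CSI volume attaches and is even reported healthy again, but kubelet keeps rejecting the credentials mount. Based on the Volumes section above, the gcp-auth addon injects a hostPath volume equivalent to the fragment below, and the type check fails whenever the path is not a regular file on the node:

volumes:
  - name: gcp-creds
    hostPath:
      path: /var/lib/minikube/google_application_credentials.json
      type: File   # kubelet refuses to mount unless this path exists and is a regular file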
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210813200824-393438 logs task-pv-pod -n default
addons_test.go:544: (dbg) Non-zero exit: kubectl --context addons-20210813200824-393438 logs task-pv-pod -n default: exit status 1 (82.166709ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: ContainerCreating

** /stderr **
addons_test.go:544: kubectl --context addons-20210813200824-393438 logs task-pv-pod -n default: exit status 1
addons_test.go:545: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210813200824-393438 -n addons-20210813200824-393438
helpers_test.go:245: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200824-393438 logs -n 25: (1.387485067s)
helpers_test.go:253: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20210813200751-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:24 UTC | Fri, 13 Aug 2021 20:08:24 UTC |
	| delete  | -p                                  | download-only-20210813200751-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:24 UTC | Fri, 13 Aug 2021 20:08:24 UTC |
	|         | download-only-20210813200751-393438 |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-only-20210813200751-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:24 UTC | Fri, 13 Aug 2021 20:08:24 UTC |
	|         | download-only-20210813200751-393438 |                                     |         |         |                               |                               |
	| start   | -p                                  | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:08:24 UTC | Fri, 13 Aug 2021 20:11:02 UTC |
	|         | addons-20210813200824-393438        |                                     |         |         |                               |                               |
	|         | --wait=true --memory=4000           |                                     |         |         |                               |                               |
	|         | --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --addons=registry                   |                                     |         |         |                               |                               |
	|         | --addons=metrics-server             |                                     |         |         |                               |                               |
	|         | --addons=olm                        |                                     |         |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                               |                               |
	|         | --driver=kvm2                       |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd      |                                     |         |         |                               |                               |
	|         | --addons=ingress                    |                                     |         |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:16 UTC | Fri, 13 Aug 2021 20:11:29 UTC |
	|         | addons enable gcp-auth --force      |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:34 UTC | Fri, 13 Aug 2021 20:11:35 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:46 UTC | Fri, 13 Aug 2021 20:11:46 UTC |
	|         | ssh curl -s http://127.0.0.1/       |                                     |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'        |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:47 UTC | Fri, 13 Aug 2021 20:11:47 UTC |
	|         | ssh curl -s http://127.0.0.1/       |                                     |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'        |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:49 UTC | Fri, 13 Aug 2021 20:11:49 UTC |
	|         | ip                                  |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:49 UTC | Fri, 13 Aug 2021 20:11:50 UTC |
	|         | addons disable registry             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:01 UTC | Fri, 13 Aug 2021 20:12:02 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:12:04 UTC | Fri, 13 Aug 2021 20:12:15 UTC |
	|         | addons disable gcp-auth             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210813200824-393438        | addons-20210813200824-393438        | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:11:47 UTC | Fri, 13 Aug 2021 20:12:17 UTC |
	|         | addons disable ingress              |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
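The Audit table above points at the likely root cause: "addons disable gcp-auth" ran from 20:12:04 to 20:12:15 UTC, the same second task-pv-pod was scheduled (Start Time 20:12:04 in the describe output). Disabling the addon presumably removes /var/lib/minikube/google_application_credentials.json from the node, while the already-admitted pod still carries the injected gcp-creds mount, which matches the "is not a file" type-check failures above. A diagnostic that would confirm this (hypothetical, not part of the recorded run):

minikube -p addons-20210813200824-393438 ssh -- stat /var/lib/minikube/google_application_credentials.json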
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:24
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:24.800545  393795 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:24.800618  393795 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:24.800622  393795 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:24.800625  393795 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:24.800718  393795 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:08:24.800987  393795 out.go:305] Setting JSON to false
	I0813 20:08:24.834776  393795 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":3067,"bootTime":1628882238,"procs":136,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:24.834895  393795 start.go:121] virtualization: kvm guest
	I0813 20:08:24.837272  393795 out.go:177] * [addons-20210813200824-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:08:24.838705  393795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:08:24.837395  393795 notify.go:169] Checking for updates...
	I0813 20:08:24.840040  393795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:08:24.841315  393795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:24.842520  393795 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:08:24.842688  393795 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:08:24.870001  393795 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:08:24.870023  393795 start.go:278] selected driver: kvm2
	I0813 20:08:24.870028  393795 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:08:24.870043  393795 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:08:24.870995  393795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:24.871142  393795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:08:24.881649  393795 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:08:24.881689  393795 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:08:24.881819  393795 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 20:08:24.881839  393795 cni.go:93] Creating CNI manager for ""
	I0813 20:08:24.881845  393795 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:08:24.881850  393795 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:08:24.881859  393795 start_flags.go:277] config:
	{Name:addons-20210813200824-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200824-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:24.881969  393795 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:24.883708  393795 out.go:177] * Starting control plane node addons-20210813200824-393438 in cluster addons-20210813200824-393438
	I0813 20:08:24.883727  393795 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:24.883750  393795 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:24.883776  393795 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:24.883886  393795 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:08:24.883901  393795 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:08:24.884151  393795 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/config.json ...
	I0813 20:08:24.884171  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/config.json: {Name:mkb230a820100d96b12ecd4d934e4b554ec0077b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:08:24.884283  393795 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:08:24.884307  393795 start.go:313] acquiring machines lock for addons-20210813200824-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:08:24.884366  393795 start.go:317] acquired machines lock for "addons-20210813200824-393438" in 47.229µs
	I0813 20:08:24.884384  393795 start.go:89] Provisioning new machine with config: &{Name:addons-20210813200824-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200824-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:08:24.884430  393795 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:08:24.886101  393795 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0813 20:08:24.886199  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:08:24.886266  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:08:24.895639  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45471
	I0813 20:08:24.896024  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:08:24.896520  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:08:24.896543  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:08:24.896919  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:08:24.897106  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetMachineName
	I0813 20:08:24.897243  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:24.897388  393795 start.go:160] libmachine.API.Create for "addons-20210813200824-393438" (driver="kvm2")
	I0813 20:08:24.897413  393795 client.go:168] LocalClient.Create starting
	I0813 20:08:24.897441  393795 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:08:25.120617  393795 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:08:25.280845  393795 main.go:130] libmachine: Running pre-create checks...
	I0813 20:08:25.280870  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .PreCreateCheck
	I0813 20:08:25.281275  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetConfigRaw
	I0813 20:08:25.281798  393795 main.go:130] libmachine: Creating machine...
	I0813 20:08:25.281814  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Create
	I0813 20:08:25.281948  393795 main.go:130] libmachine: (addons-20210813200824-393438) Creating KVM machine...
	I0813 20:08:25.284504  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found existing default KVM network
	I0813 20:08:25.285421  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.285273  393819 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000a85c8] misses:0}
	I0813 20:08:25.285456  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.285374  393819 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:08:25.309231  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | trying to create private KVM network mk-addons-20210813200824-393438 192.168.39.0/24...
	I0813 20:08:25.523327  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | private KVM network mk-addons-20210813200824-393438 192.168.39.0/24 created
	I0813 20:08:25.523364  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.523289  393819 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:25.523388  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438 ...
	I0813 20:08:25.523416  393795 main.go:130] libmachine: (addons-20210813200824-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:08:25.523518  393795 main.go:130] libmachine: (addons-20210813200824-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:08:25.708900  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.708765  393819 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa...
	I0813 20:08:25.773951  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.773861  393819 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/addons-20210813200824-393438.rawdisk...
	I0813 20:08:25.773977  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Writing magic tar header
	I0813 20:08:25.773999  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Writing SSH key tar header
	I0813 20:08:25.774021  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:25.773973  393819 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438 ...
	I0813 20:08:25.774191  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438
	I0813 20:08:25.774234  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438 (perms=drwx------)
	I0813 20:08:25.774257  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:08:25.774294  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:08:25.774306  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:08:25.774322  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:08:25.774337  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:08:25.774357  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:08:25.774368  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Checking permissions on dir: /home
	I0813 20:08:25.774385  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:08:25.774401  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Skipping /home - not owner
	I0813 20:08:25.774448  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:08:25.774483  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:08:25.774500  393795 main.go:130] libmachine: (addons-20210813200824-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:08:25.774514  393795 main.go:130] libmachine: (addons-20210813200824-393438) Creating domain...
	I0813 20:08:25.798328  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:25:a3:63 in network default
	I0813 20:08:25.798839  393795 main.go:130] libmachine: (addons-20210813200824-393438) Ensuring networks are active...
	I0813 20:08:25.798864  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:25.800465  393795 main.go:130] libmachine: (addons-20210813200824-393438) Ensuring network default is active
	I0813 20:08:25.800700  393795 main.go:130] libmachine: (addons-20210813200824-393438) Ensuring network mk-addons-20210813200824-393438 is active
	I0813 20:08:25.801202  393795 main.go:130] libmachine: (addons-20210813200824-393438) Getting domain xml...
	I0813 20:08:25.802844  393795 main.go:130] libmachine: (addons-20210813200824-393438) Creating domain...
	I0813 20:08:26.299858  393795 main.go:130] libmachine: (addons-20210813200824-393438) Waiting to get IP...
	I0813 20:08:26.300602  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:26.300906  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:26.300956  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:26.300902  393819 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:08:26.565182  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:26.565500  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:26.565525  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:26.565454  393819 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:08:26.947958  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:26.948312  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:26.948343  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:26.948256  393819 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:08:27.372761  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:27.373105  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:27.373142  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:27.373034  393819 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:08:27.847536  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:27.847872  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:27.847902  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:27.847817  393819 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:08:28.436185  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:28.436478  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:28.436502  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:28.436442  393819 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:08:29.272247  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:29.272541  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:29.272575  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:29.272483  393819 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:08:30.020819  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:31.892510  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:31.892562  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:30.021036  393819 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 20:08:31.892579  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:31.892589  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:31.892600  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:31.010439  393819 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:08:32.201692  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:32.202033  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:32.202075  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:32.201951  393819 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:08:33.880215  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:33.880538  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:33.880569  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:33.880499  393819 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:08:36.228523  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:40.763772  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:40.763820  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:36.228797  393819 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:08:40.763838  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:40.763850  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find current IP address of domain addons-20210813200824-393438 in network mk-addons-20210813200824-393438
	I0813 20:08:40.763858  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | I0813 20:08:39.599354  393819 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:08:42.720826  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.721189  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has current primary IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.721222  393795 main.go:130] libmachine: (addons-20210813200824-393438) Found IP for machine: 192.168.39.71
	I0813 20:08:42.721238  393795 main.go:130] libmachine: (addons-20210813200824-393438) Reserving static IP address...
	I0813 20:08:42.721527  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | unable to find host DHCP lease matching {name: "addons-20210813200824-393438", mac: "52:54:00:1a:a8:f0", ip: "192.168.39.71"} in network mk-addons-20210813200824-393438
	I0813 20:08:42.766188  393795 main.go:130] libmachine: (addons-20210813200824-393438) Reserved static IP address: 192.168.39.71
	I0813 20:08:42.766225  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Getting to WaitForSSH function...
	I0813 20:08:42.766236  393795 main.go:130] libmachine: (addons-20210813200824-393438) Waiting for SSH to be available...
	I0813 20:08:42.770742  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.771100  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:42.771131  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.771238  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Using SSH client type: external
	I0813 20:08:42.771266  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa (-rw-------)
	I0813 20:08:42.771310  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:08:42.771331  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | About to run SSH command:
	I0813 20:08:42.771344  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | exit 0
	I0813 20:08:42.905708  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 20:08:42.906070  393795 main.go:130] libmachine: (addons-20210813200824-393438) KVM machine creation complete!
	I0813 20:08:42.906139  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetConfigRaw
	I0813 20:08:42.906751  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:42.906958  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:42.907115  393795 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:08:42.907137  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:08:42.909531  393795 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:08:42.909544  393795 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:08:42.909551  393795 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:08:42.909558  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:42.913824  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.914100  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:42.914129  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:42.914206  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:42.914436  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:42.914577  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:42.914735  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:42.914875  393795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:42.915070  393795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0813 20:08:42.915081  393795 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:08:43.029551  393795 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:08:43.029574  393795 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:08:43.029581  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.034336  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.034613  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.034640  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.034770  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.034918  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.035032  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.035147  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.035306  393795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:43.035470  393795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0813 20:08:43.035482  393795 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:08:43.154932  393795 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:08:43.155054  393795 main.go:130] libmachine: found compatible host: buildroot
	I0813 20:08:43.155068  393795 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:08:43.155076  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetMachineName
	I0813 20:08:43.155288  393795 buildroot.go:166] provisioning hostname "addons-20210813200824-393438"
	I0813 20:08:43.155312  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetMachineName
	I0813 20:08:43.155454  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.160248  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.160525  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.160554  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.160615  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.160767  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.160890  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.161014  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.161208  393795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:43.161360  393795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0813 20:08:43.161378  393795 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813200824-393438 && echo "addons-20210813200824-393438" | sudo tee /etc/hostname
	I0813 20:08:43.286438  393795 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813200824-393438
	
	I0813 20:08:43.286470  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.291714  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.292060  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.292089  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.292267  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.292449  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.292613  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.292778  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.292969  393795 main.go:130] libmachine: Using SSH client type: native
	I0813 20:08:43.293123  393795 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0813 20:08:43.293153  393795 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813200824-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813200824-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813200824-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:08:43.416973  393795 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:08:43.416997  393795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:08:43.417017  393795 buildroot.go:174] setting up certificates
	I0813 20:08:43.417030  393795 provision.go:83] configureAuth start
	I0813 20:08:43.417049  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetMachineName
	I0813 20:08:43.417253  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetIP
	I0813 20:08:43.421873  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.422220  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.422260  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.422321  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.426540  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.426839  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.426868  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.426944  393795 provision.go:138] copyHostCerts
	I0813 20:08:43.427010  393795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:08:43.427140  393795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:08:43.427199  393795 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:08:43.427247  393795 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.addons-20210813200824-393438 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube addons-20210813200824-393438]
	I0813 20:08:43.554559  393795 provision.go:172] copyRemoteCerts
	I0813 20:08:43.554610  393795 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:08:43.554635  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.559056  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.559317  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.559350  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.559455  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.559600  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.559737  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.559830  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:08:43.645777  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 20:08:43.661302  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:08:43.676456  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:08:43.692999  393795 provision.go:86] duration metric: configureAuth took 275.952288ms
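	The server certificate generated above includes the VM IP, localhost, and the minikube hostname as SANs, so TLS validates no matter which name a client dials. A quick way to spot-check the SANs on a cert like this (a sketch; MINIKUBE_HOME stands in for the long .minikube path from the log):
	    # list the Subject Alternative Names baked into the server cert
	    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
	      | grep -A1 'Subject Alternative Name'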
	I0813 20:08:43.693018  393795 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:08:43.693172  393795 config.go:177] Loaded profile config "addons-20210813200824-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:08:43.693196  393795 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:08:43.693211  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetURL
	I0813 20:08:43.696318  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Using libvirt version 3000000
	I0813 20:08:43.700398  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.700674  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.700703  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.700831  393795 main.go:130] libmachine: Docker is up and running!
	I0813 20:08:43.700844  393795 main.go:130] libmachine: Reticulating splines...
	I0813 20:08:43.700851  393795 client.go:171] LocalClient.Create took 18.803432411s
	I0813 20:08:43.700868  393795 start.go:168] duration metric: libmachine.API.Create for "addons-20210813200824-393438" took 18.803481306s
	I0813 20:08:43.700883  393795 start.go:267] post-start starting for "addons-20210813200824-393438" (driver="kvm2")
	I0813 20:08:43.700889  393795 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:08:43.700910  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:43.701094  393795 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:08:43.701118  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.704953  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.705205  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.705240  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.705355  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.705534  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.705666  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.705761  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:08:43.789685  393795 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:08:43.794102  393795 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:08:43.794122  393795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:08:43.794186  393795 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:08:43.794223  393795 start.go:270] post-start completed in 93.333859ms
	I0813 20:08:43.794254  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetConfigRaw
	I0813 20:08:43.794756  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetIP
	I0813 20:08:43.799159  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.799440  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.799466  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.799674  393795 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/config.json ...
	I0813 20:08:43.799818  393795 start.go:129] duration metric: createHost completed in 18.915380594s
	I0813 20:08:43.799829  393795 start.go:80] releasing machines lock for "addons-20210813200824-393438", held for 18.915453659s
	I0813 20:08:43.799863  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:43.800043  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetIP
	I0813 20:08:43.804111  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.804351  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.804376  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.804493  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:43.804646  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:43.805041  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:08:43.805267  393795 ssh_runner.go:149] Run: systemctl --version
	I0813 20:08:43.805289  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.805312  393795 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:08:43.805344  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:08:43.811696  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.811832  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.812045  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.812073  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.812099  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:08:43.812120  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:08:43.812192  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.812393  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:08:43.812403  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.812554  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.812566  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:08:43.812725  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:08:43.812720  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:08:43.812837  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:08:43.895987  393795 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:43.896071  393795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:08:47.906062  393795 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.009958464s)
	I0813 20:08:47.906190  393795 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:08:47.906258  393795 ssh_runner.go:149] Run: which lz4
	I0813 20:08:47.910455  393795 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:08:47.914934  393795 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:08:47.914966  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0813 20:08:51.785606  393795 containerd.go:546] Took 3.875185 seconds to copy over tarball
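	The failed stat above is an existence check: because it exits non-zero, minikube falls back to copying the ~886 MiB preload tarball into the guest. The same check-then-copy pattern, sketched with plain ssh/scp in place of minikube's internal runner:
	    # upload the preload tarball only if the guest doesn't already have it
	    if ! ssh docker@192.168.39.71 stat /preloaded.tar.lz4 >/dev/null 2>&1; then
	      scp preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 \
	        docker@192.168.39.71:/preloaded.tar.lz4
	    fi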
	I0813 20:08:51.785683  393795 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:08:59.266406  393795 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.480697941s)
	I0813 20:08:59.266440  393795 containerd.go:553] Took 7.480799 seconds to extract the tarball
	I0813 20:08:59.266451  393795 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:08:59.331752  393795 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:08:59.501887  393795 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:08:59.552920  393795 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:08:59.589032  393795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:08:59.603064  393795 docker.go:153] disabling docker service ...
	I0813 20:08:59.603123  393795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:08:59.613338  393795 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:08:59.624260  393795 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:08:59.756397  393795 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:09:00.324502  393795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
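	The sequence above is the standard systemd way to keep a competing runtime down for good: stop the socket and the service, disable socket activation, then mask the service so nothing can pull it back in. By hand it would look like:
	    sudo systemctl stop -f docker.socket docker.service   # stop both units now
	    sudo systemctl disable docker.socket                  # no more socket activation
	    sudo systemctl mask docker.service                    # symlink to /dev/null; cannot start
	    systemctl is-active --quiet docker || echo "docker is down"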
	I0813 20:09:00.334994  393795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:09:00.348430  393795 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
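	The containerd configuration is shipped base64-encoded so it survives shell quoting intact. To see the TOML that actually lands in /etc/containerd/config.toml, decode the blob (CONFIG_B64 is a placeholder for the long string above):
	    # decode the embedded containerd config for inspection
	    echo "$CONFIG_B64" | base64 -d | head -n 20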
	I0813 20:09:00.361639  393795 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:09:00.368531  393795 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:09:00.368595  393795 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:09:00.385237  393795 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
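	The sysctl probe fails only because the br_netfilter module isn't loaded yet, so minikube loads it and turns on forwarding. Recovering by hand amounts to:
	    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # let iptables see bridged traffic
	    sudo sysctl -w net.ipv4.ip_forward=1                  # same effect as the echo into /proc above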
	I0813 20:09:00.391592  393795 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:09:00.519828  393795 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:09:03.676080  393795 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (3.156204056s)
	I0813 20:09:03.676120  393795 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:09:03.676184  393795 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:09:03.689613  393795 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:09:04.795228  393795 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
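	Rather than sleeping a fixed interval after restarting containerd, the code polls for the socket and retries until its 60s budget runs out. A minimal shell version of that wait:
	    # wait up to 60s for the containerd socket to appear
	    for i in $(seq 1 60); do
	      [ -S /run/containerd/containerd.sock ] && break
	      sleep 1
	    done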
	I0813 20:09:04.800738  393795 start.go:413] Will wait 60s for crictl version
	I0813 20:09:04.800797  393795 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:09:04.838265  393795 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:09:04.838331  393795 ssh_runner.go:149] Run: containerd --version
	I0813 20:09:04.867615  393795 ssh_runner.go:149] Run: containerd --version
	I0813 20:09:04.901993  393795 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:09:04.902037  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetIP
	I0813 20:09:04.907966  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:04.908358  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:04.908388  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:04.908603  393795 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:09:04.915209  393795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
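	The one-liner above is an idempotent /etc/hosts rewrite: strip any stale host.minikube.internal entry, append a fresh one, and copy the result back over the original. Generalized, with NAME and IP as placeholders:
	    # replace-or-append a hosts entry without ever duplicating it
	    NAME=host.minikube.internal; IP=192.168.39.1
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts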
	I0813 20:09:04.927150  393795 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:09:04.927218  393795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:09:04.961677  393795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:09:04.961698  393795 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:09:04.961739  393795 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:09:04.991402  393795 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:09:04.991421  393795 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:09:04.991465  393795 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:09:05.026530  393795 cni.go:93] Creating CNI manager for ""
	I0813 20:09:05.026549  393795 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:09:05.026573  393795 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:09:05.026589  393795 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813200824-393438 NodeName:addons-20210813200824-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFi
le:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:09:05.026794  393795 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210813200824-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
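	A generated config like the one above can be sanity-checked before it is allowed to touch the node; kubeadm's --dry-run flag renders the manifests without starting anything:
	    # validate the generated kubeadm config without creating anything
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run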
	
	I0813 20:09:05.026914  393795 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210813200824-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813200824-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
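	The drop-in above replaces kubelet's ExecStart so it talks to containerd over the CRI socket instead of Docker. Once the files are in place, systemd can show the merged result:
	    # show the kubelet unit together with its 10-kubeadm.conf drop-in
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart --no-pager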
	I0813 20:09:05.026970  393795 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:09:05.034281  393795 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:09:05.034329  393795 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:09:05.041235  393795 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0813 20:09:05.053001  393795 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:09:05.064464  393795 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0813 20:09:05.075905  393795 ssh_runner.go:149] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0813 20:09:05.079850  393795 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:09:05.089799  393795 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438 for IP: 192.168.39.71
	I0813 20:09:05.089840  393795 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:09:05.363891  393795 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt ...
	I0813 20:09:05.363919  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt: {Name:mk96a7b146aed67a7ddc77414e0d7ee4a9f5639f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.364160  393795 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key ...
	I0813 20:09:05.364177  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key: {Name:mkb28d534ae8896c3c63f31184e051442e741d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.364293  393795 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:09:05.585285  393795 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt ...
	I0813 20:09:05.585315  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt: {Name:mk0af008d0762eedd86b6f4bf2a05829d95a7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.585500  393795 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key ...
	I0813 20:09:05.585517  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key: {Name:mk496073d6e9326b9534a852520dd9a1dfacd8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.585655  393795 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.key
	I0813 20:09:05.585667  393795 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt with IP's: []
	I0813 20:09:05.676108  393795 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt ...
	I0813 20:09:05.676136  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: {Name:mkc7e44aa5bb20ac3af4a16850ce3ec836415891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.676305  393795 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.key ...
	I0813 20:09:05.676321  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.key: {Name:mk61203d37cf0ac863f5f6a423df792d7e1ea9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.676433  393795 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key.f4667c0f
	I0813 20:09:05.676444  393795 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt.f4667c0f with IP's: [192.168.39.71 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:09:05.807372  393795 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt.f4667c0f ...
	I0813 20:09:05.807398  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt.f4667c0f: {Name:mkd336c86f26f549d9c200ce36597a89bb6da139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.807541  393795 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key.f4667c0f ...
	I0813 20:09:05.807555  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key.f4667c0f: {Name:mkf07a59d7356cf2645516afc5686e9070652037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:05.807647  393795 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt.f4667c0f -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt
	I0813 20:09:05.807719  393795 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key.f4667c0f -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key
	I0813 20:09:05.807787  393795 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.key
	I0813 20:09:05.807799  393795 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.crt with IP's: []
	I0813 20:09:06.096866  393795 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.crt ...
	I0813 20:09:06.096908  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.crt: {Name:mkfd8392816cb47f48c4f770332fffb25ddf747f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:06.097125  393795 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.key ...
	I0813 20:09:06.097144  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.key: {Name:mk7c69f37f8cb4970c5fc7ce2777f2c2bae2ecca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:06.097357  393795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:09:06.097403  393795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:09:06.097433  393795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:09:06.097463  393795 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:09:06.098457  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:09:06.115876  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:09:06.132057  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:09:06.147925  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:09:06.164201  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:09:06.180445  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:09:06.196788  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:09:06.212845  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:09:06.230237  393795 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:09:06.245702  393795 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:09:06.257408  393795 ssh_runner.go:149] Run: openssl version
	I0813 20:09:06.263142  393795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:09:06.270488  393795 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:06.274969  393795 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:06.275003  393795 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:09:06.280557  393795 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
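	The two steps above install minikubeCA into the OpenSSL trust store: b5213941 is the subject hash of the cert, and OpenSSL looks up CAs by hash-named symlinks, which is why the hash is computed first. The same steps by hand:
	    # create the hash-named symlink OpenSSL uses to find the CA
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"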
	I0813 20:09:06.287958  393795 kubeadm.go:390] StartCluster: {Name:addons-20210813200824-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 C
lusterName:addons-20210813200824-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:09:06.288034  393795 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:09:06.288066  393795 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:09:06.320965  393795 cri.go:76] found id: ""
	I0813 20:09:06.321013  393795 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:09:06.328453  393795 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:09:06.335304  393795 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:09:06.342744  393795 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:09:06.342773  393795 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:09:06.772726  393795 out.go:204]   - Generating certificates and keys ...
	I0813 20:09:09.464601  393795 out.go:204]   - Booting up control plane ...
	I0813 20:09:25.072144  393795 out.go:204]   - Configuring RBAC rules ...
	I0813 20:09:25.621263  393795 cni.go:93] Creating CNI manager for ""
	I0813 20:09:25.621288  393795 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:09:25.622976  393795 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:09:25.623052  393795 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:09:25.630424  393795 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 20:09:25.644240  393795 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:09:25.644322  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:25.644344  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=addons-20210813200824-393438 minikube.k8s.io/updated_at=2021_08_13T20_09_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:25.664957  393795 ops.go:34] apiserver oom_adj: -16
	I0813 20:09:25.841471  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:26.446306  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:26.946622  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:27.446349  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:27.946278  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:28.445926  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:28.945957  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:29.446607  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:29.945830  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:30.446558  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:30.946354  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:31.446034  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:31.945706  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:32.445838  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:32.946025  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:33.446148  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:33.946140  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:34.446258  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:34.946216  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:35.446132  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:35.945825  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:36.446715  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:36.945790  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:37.445753  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:37.946438  393795 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:09:38.057420  393795 kubeadm.go:985] duration metric: took 12.413153203s to wait for elevateKubeSystemPrivileges.
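	The burst of identical "kubectl get sa default" runs above is a readiness poll: the default ServiceAccount only exists once the controller manager has settled, so minikube retries about twice a second until the call succeeds (12.4s here). Reduced to shell, the loop is simply:
	    # poll until the default ServiceAccount exists
	    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done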
	I0813 20:09:38.057458  393795 kubeadm.go:392] StartCluster complete in 31.769507081s
	I0813 20:09:38.057491  393795 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:38.057650  393795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:09:38.058158  393795 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:09:38.583680  393795 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813200824-393438" rescaled to 1
	I0813 20:09:38.583782  393795 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:09:38.585601  393795 out.go:177] * Verifying Kubernetes components...
	I0813 20:09:38.585676  393795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:09:38.583828  393795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:09:38.583856  393795 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0813 20:09:38.585893  393795 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.585916  393795 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813200824-393438"
	I0813 20:09:38.585948  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.586330  393795 addons.go:59] Setting ingress=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586356  393795 addons.go:135] Setting addon ingress=true in "addons-20210813200824-393438"
	I0813 20:09:38.586391  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.586498  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.586554  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.586738  393795 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586758  393795 addons.go:59] Setting olm=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586767  393795 addons.go:59] Setting default-storageclass=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586786  393795 addons.go:59] Setting metrics-server=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586787  393795 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813200824-393438"
	I0813 20:09:38.586797  393795 addons.go:135] Setting addon metrics-server=true in "addons-20210813200824-393438"
	I0813 20:09:38.586804  393795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813200824-393438"
	I0813 20:09:38.586819  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.586825  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.586834  393795 addons.go:59] Setting helm-tiller=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.586845  393795 addons.go:135] Setting addon helm-tiller=true in "addons-20210813200824-393438"
	I0813 20:09:38.586868  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.587221  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.586776  393795 addons.go:135] Setting addon olm=true in "addons-20210813200824-393438"
	I0813 20:09:38.587237  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.587243  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.586824  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.587275  393795 addons.go:59] Setting registry=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.587278  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.587278  393795 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813200824-393438"
	I0813 20:09:38.587286  393795 addons.go:135] Setting addon registry=true in "addons-20210813200824-393438"
	I0813 20:09:38.587292  393795 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813200824-393438"
	W0813 20:09:38.587301  393795 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:09:38.587316  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.587327  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.587232  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.587428  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.587290  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.587262  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.587717  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.587745  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.584019  393795 config.go:177] Loaded profile config "addons-20210813200824-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:09:38.587263  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.587270  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.587690  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.588061  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.588126  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.588166  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.599327  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0813 20:09:38.599340  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0813 20:09:38.599894  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.599944  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.600483  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.600493  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.600511  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.600512  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.600902  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.600953  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.601519  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.601521  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.601567  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.601579  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.608399  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0813 20:09:38.608473  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0813 20:09:38.610658  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.610966  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.613738  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.613766  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.613786  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.613840  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.615406  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0813 20:09:38.615583  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.615582  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.615910  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.616433  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.616474  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.616505  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.616545  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.616688  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0813 20:09:38.616738  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.616759  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.617287  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.622516  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0813 20:09:38.626113  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0813 20:09:38.628255  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0813 20:09:38.634699  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0813 20:09:38.643508  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.643538  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.643611  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.643750  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.643804  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.643903  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.644061  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.644079  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.644468  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.644555  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.644563  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.644572  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.644576  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.644589  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.644591  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.644934  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.645073  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.644989  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.645021  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.645189  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.645075  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.645356  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.645449  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.645599  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.645623  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.645655  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.645660  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.646385  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.647005  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.647041  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.649051  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.651353  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 20:09:38.651440  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 20:09:38.651452  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
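The "scp memory --> /etc/kubernetes/addons/..." lines here and below mean the addon manifests are rendered in memory and streamed to the node over SSH rather than copied from files on disk. A rough sketch of that idea, assuming a plain ssh client plus sudo tee rather than minikube's actual ssh_runner/sshutil transport; host, path, and payload are illustrative:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // pushBytes streams an in-memory manifest to a remote path, the same idea
    // as the "scp memory" lines in the log above.
    func pushBytes(host, remotePath string, data []byte) error {
    	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
    	cmd.Stdin = bytes.NewReader(data) // remote file contents come from memory
    	return cmd.Run()
    }

    func main() {
    	manifest := []byte("apiVersion: v1\nkind: ConfigMap\n") // illustrative payload
    	if err := pushBytes("docker@192.168.39.71", "/etc/kubernetes/addons/example.yaml", manifest); err != nil {
    		fmt.Println("copy failed:", err)
    	}
    }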
	I0813 20:09:38.651467  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.657406  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0813 20:09:38.657776  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.657905  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.658353  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.658392  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.658474  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.658495  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.658891  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.658953  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.659075  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.659207  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.659253  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.659298  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37571
	I0813 20:09:38.659561  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.659689  393795 addons.go:135] Setting addon default-storageclass=true in "addons-20210813200824-393438"
	W0813 20:09:38.659706  393795 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:09:38.659734  393795 host.go:66] Checking if "addons-20210813200824-393438" exists ...
	I0813 20:09:38.659820  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.660124  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.660170  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.660237  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.660256  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.660596  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.660748  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.662213  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.664053  393795 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:38.663704  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.665661  393795 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 20:09:38.666969  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 20:09:38.668442  393795 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 20:09:38.669991  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 20:09:38.667263  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0813 20:09:38.667955  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0813 20:09:38.668516  393795 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 20:09:38.670474  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.671395  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 20:09:38.672813  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 20:09:38.671520  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 20:09:38.672861  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.672219  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.674410  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 20:09:38.672343  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.674294  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.676357  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 20:09:38.676375  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.677765  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 20:09:38.676359  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.676821  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.679091  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0813 20:09:38.679311  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 20:09:38.680774  393795 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 20:09:38.680832  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 20:09:38.679711  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0813 20:09:38.680847  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 20:09:38.680866  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.679716  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.679747  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.679904  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.679956  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.681034  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.681063  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.680591  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.681221  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.681411  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.681515  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.681533  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.681926  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.682413  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.682418  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.682433  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.682458  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0813 20:09:38.682628  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.682895  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.683072  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.683089  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.683458  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.683476  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.683485  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.683875  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.683893  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0813 20:09:38.683913  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.684103  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.684438  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.684959  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.684976  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.685495  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.686043  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.686108  393795 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:09:38.686149  393795 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:09:38.688062  393795 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 20:09:38.688182  393795 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 20:09:38.688196  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 20:09:38.688240  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.688927  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.690965  393795 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 20:09:38.689497  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.689897  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.692351  393795 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 20:09:38.690466  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.693902  393795 out.go:177]   - Using image registry:2.7.1
	I0813 20:09:38.695194  393795 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:09:38.691513  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.695301  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.695312  393795 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:38.695323  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:09:38.695329  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.692022  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.695343  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.696913  393795 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0813 20:09:38.698339  393795 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 20:09:38.695132  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.698469  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.698487  393795 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 20:09:38.698498  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.695513  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.698502  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 20:09:38.698561  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.695710  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.697004  393795 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0813 20:09:38.698637  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0813 20:09:38.698652  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.698759  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.698930  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.699738  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.699919  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.700090  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.700783  393795 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0813 20:09:38.701204  393795 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:09:38.701787  393795 main.go:130] libmachine: Using API Version  1
	I0813 20:09:38.701805  393795 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:09:38.701827  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.702203  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.702231  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.702306  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.702381  393795 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:09:38.702580  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.702583  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetState
	I0813 20:09:38.702749  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.702954  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.706057  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.707097  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.707122  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.707318  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.707490  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.707584  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .DriverName
	I0813 20:09:38.707665  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.707776  393795 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:38.707786  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:09:38.707801  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.707838  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.708166  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.708589  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.708620  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.708788  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.708962  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.709122  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.709354  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.713619  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.715440  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.715473  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.715514  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.715673  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.715769  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.715904  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.717384  393795 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 20:09:38.717419  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 20:09:38.717441  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHHostname
	I0813 20:09:38.722655  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.723050  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:a8:f0", ip: ""} in network mk-addons-20210813200824-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:08:39 +0000 UTC Type:0 Mac:52:54:00:1a:a8:f0 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-20210813200824-393438 Clientid:01:52:54:00:1a:a8:f0}
	I0813 20:09:38.723082  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | domain addons-20210813200824-393438 has defined IP address 192.168.39.71 and MAC address 52:54:00:1a:a8:f0 in network mk-addons-20210813200824-393438
	I0813 20:09:38.723242  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHPort
	I0813 20:09:38.723412  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHKeyPath
	I0813 20:09:38.723604  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .GetSSHUsername
	I0813 20:09:38.723771  393795 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/addons-20210813200824-393438/id_rsa Username:docker}
	I0813 20:09:38.883443  393795 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 20:09:38.883466  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 20:09:38.959885  393795 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 20:09:38.959907  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 20:09:38.993909  393795 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:38.993934  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 20:09:39.031522  393795 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 20:09:39.031543  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 20:09:39.064037  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 20:09:39.064055  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 20:09:39.106028  393795 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 20:09:39.106056  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 20:09:39.145373  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:09:39.156561  393795 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 20:09:39.156587  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 20:09:39.166653  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 20:09:39.166974  393795 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 20:09:39.166997  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 20:09:39.173293  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 20:09:39.173315  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 20:09:39.181574  393795 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0813 20:09:39.181595  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0813 20:09:39.209215  393795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 20:09:39.209236  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 20:09:39.257097  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:09:39.278863  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 20:09:39.278890  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 20:09:39.297262  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:39.310095  393795 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:39.310115  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 20:09:39.317291  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 20:09:39.317311  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 20:09:39.317891  393795 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:39.317908  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0813 20:09:39.365267  393795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 20:09:39.365289  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 20:09:39.382927  393795 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:39.382945  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 20:09:39.405292  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 20:09:39.405311  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 20:09:39.441402  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 20:09:39.455719  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 20:09:39.654504  393795 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:39.654538  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 20:09:39.671707  393795 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 20:09:39.671730  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 20:09:39.680358  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:39.762701  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 20:09:39.762758  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 20:09:39.826196  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 20:09:40.018251  393795 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.43251719s)
	I0813 20:09:40.018350  393795 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.432536411s)
	I0813 20:09:40.018500  393795 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
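The long sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block immediately before the existing "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), then feeds the result back through kubectl replace. After the edit, the Corefile contains a stanza equivalent to:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }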
	I0813 20:09:40.020116  393795 node_ready.go:35] waiting up to 6m0s for node "addons-20210813200824-393438" to be "Ready" ...
	I0813 20:09:40.025781  393795 node_ready.go:49] node "addons-20210813200824-393438" has status "Ready":"True"
	I0813 20:09:40.025799  393795 node_ready.go:38] duration metric: took 5.661267ms waiting for node "addons-20210813200824-393438" to be "Ready" ...
	I0813 20:09:40.025807  393795 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:09:40.037290  393795 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:40.289237  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 20:09:40.289263  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 20:09:40.464340  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 20:09:40.464366  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 20:09:40.786713  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 20:09:40.786741  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 20:09:40.989824  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 20:09:40.989855  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 20:09:41.120250  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 20:09:41.120276  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 20:09:41.289338  393795 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 20:09:41.289378  393795 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 20:09:41.524570  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 20:09:42.079421  393795 pod_ready.go:102] pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:43.501421  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.356001422s)
	I0813 20:09:43.501499  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:43.501519  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:43.501814  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:43.501865  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:43.501882  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:43.501898  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:43.502173  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:43.502201  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:43.502234  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:44.115338  393795 pod_ready.go:102] pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:46.310798  393795 pod_ready.go:102] pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:46.497112  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (7.330408833s)
	I0813 20:09:46.497158  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.240024175s)
	I0813 20:09:46.497172  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:46.497193  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:46.497208  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:46.497237  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:46.497505  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:46.497522  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:46.497526  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:46.497532  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:46.497544  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:46.497527  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:46.497556  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:46.497566  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:46.497545  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:46.497756  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:46.497770  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:46.497781  393795 addons.go:313] Verifying addon ingress=true in "addons-20210813200824-393438"
	I0813 20:09:46.499409  393795 out.go:177] * Verifying ingress addon...
	I0813 20:09:46.498031  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:46.499507  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:46.499520  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:46.499526  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:46.498069  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:46.499718  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:46.499744  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:46.499753  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:46.501202  393795 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 20:09:46.616926  393795 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 20:09:46.616952  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:47.174883  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
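The kapi.go lines above poll the pods matching the label selector until every one reports a Ready condition, which is why the same "current state: Pending" line repeats. A hedged Go sketch of that wait, shelling out to kubectl with a jsonpath query; the selector and namespace are the ones from this log, everything else is a stand-in for kapi.go's real client-go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // allReady polls the Ready condition of every pod matching a label
    // selector until all report "True" or the timeout passes.
    func allReady(ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	jsonpath := `{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
    			"-l", selector, "-o", "jsonpath="+jsonpath).Output()
    		if err == nil {
    			statuses := strings.Fields(string(out))
    			ready := len(statuses) > 0 // no pods yet means not ready
    			for _, s := range statuses {
    				if s != "True" {
    					ready = false
    				}
    			}
    			if ready {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pods %q in %q not Ready within %s", selector, ns, timeout)
    }

    func main() {
    	fmt.Println(allReady("ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
    }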
	I0813 20:09:47.575472  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.134035979s)
	I0813 20:09:47.575528  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (8.278237583s)
	W0813 20:09:47.575565  393795 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 20:09:47.575572  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.119820239s)
	I0813 20:09:47.575610  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.575582  393795 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 20:09:47.575624  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.575527  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.575700  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.575759  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.895360884s)
	W0813 20:09:47.575853  393795 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 20:09:47.575905  393795 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
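
[Note: the snapshot-controller apply fails the same way: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass under snapshot.storage.k8s.io/v1 in the same apply that creates the snapshot.storage.k8s.io CRDs, so the v1 API is not yet served on the first pass. A hedged way to check which versions the server recognizes at any point, with the same kubeconfig as above:

    # lists the resources and versions currently served for the snapshot API group
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.21.3/kubectl api-resources --api-group=snapshot.storage.k8s.io
]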
	I0813 20:09:47.575918  393795 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.557401362s)
	I0813 20:09:47.575961  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.575976  393795 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
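
[Note: the host record above comes from the Completed sed pipeline at 20:09:47.575918, which rewrites the coredns ConfigMap by inserting a hosts plugin block ahead of the forward directive. Based purely on that sed expression, the injected Corefile fragment should read:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf ...
]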
	I0813 20:09:47.575917  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.575995  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.575872  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.749639094s)
	I0813 20:09:47.576005  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.576015  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.576019  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.576025  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.576028  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.576037  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.575999  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.576080  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.576089  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.577766  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.577774  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.577781  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.577828  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.577782  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.577842  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.577871  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.577888  393795 addons.go:313] Verifying addon registry=true in "addons-20210813200824-393438"
	I0813 20:09:47.577849  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.577985  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:47.577995  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:47.577796  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.579721  393795 out.go:177] * Verifying registry addon...
	I0813 20:09:47.578189  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:47.579814  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:47.579831  393795 addons.go:313] Verifying addon metrics-server=true in "addons-20210813200824-393438"
	I0813 20:09:47.578253  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:47.581527  393795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 20:09:47.612135  393795 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 20:09:47.612155  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
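
[Note: the kapi.go "waiting for pod" lines are a poll loop over a label selector until the pods leave Pending. An illustrative ad-hoc equivalent from outside the test, using the context name from the log:

    # shows the two registry pods the loop above is polling
    kubectl --context addons-20210813200824-393438 -n kube-system \
      get pods -l kubernetes.io/minikube-addons=registry
]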
	I0813 20:09:47.646856  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:47.867719  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 20:09:47.936634  393795 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 20:09:48.128796  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:48.140721  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:48.639735  393795 pod_ready.go:102] pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace has status "Ready":"False"
	I0813 20:09:48.790278  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:48.790855  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:49.245523  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:49.249658  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:49.525356  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.000707313s)
	I0813 20:09:49.525446  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:49.525462  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:49.525774  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:49.525795  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:49.525813  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:49.525814  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:49.525824  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:49.526056  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:49.526073  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:49.526084  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:49.526084  393795 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813200824-393438"
	I0813 20:09:49.528027  393795 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 20:09:49.529626  393795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 20:09:49.540617  393795 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 20:09:49.540635  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:49.630507  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:49.630609  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.066907  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:50.093522  393795 pod_ready.go:92] pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.093549  393795 pod_ready.go:81] duration metric: took 10.056233221s waiting for pod "coredns-558bd4d5db-2tfm4" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.093559  393795 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-nvvhj" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.097783  393795 pod_ready.go:97] error getting pod "coredns-558bd4d5db-nvvhj" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nvvhj" not found
	I0813 20:09:50.097805  393795 pod_ready.go:81] duration metric: took 4.239831ms waiting for pod "coredns-558bd4d5db-nvvhj" in "kube-system" namespace to be "Ready" ...
	E0813 20:09:50.097818  393795 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-nvvhj" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-nvvhj" not found
	I0813 20:09:50.097831  393795 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.105177  393795 pod_ready.go:92] pod "etcd-addons-20210813200824-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.105193  393795 pod_ready.go:81] duration metric: took 7.352925ms waiting for pod "etcd-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.105201  393795 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.111294  393795 pod_ready.go:92] pod "kube-apiserver-addons-20210813200824-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.111310  393795 pod_ready.go:81] duration metric: took 6.101978ms waiting for pod "kube-apiserver-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.111320  393795 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.121666  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.123409  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.125829  393795 pod_ready.go:92] pod "kube-controller-manager-addons-20210813200824-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.125851  393795 pod_ready.go:81] duration metric: took 14.523533ms waiting for pod "kube-controller-manager-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.125864  393795 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tz56r" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.271429  393795 pod_ready.go:92] pod "kube-proxy-tz56r" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.271450  393795 pod_ready.go:81] duration metric: took 145.574615ms waiting for pod "kube-proxy-tz56r" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.271461  393795 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.552633  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:50.618655  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:50.620974  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:50.669835  393795 pod_ready.go:92] pod "kube-scheduler-addons-20210813200824-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:09:50.669855  393795 pod_ready.go:81] duration metric: took 398.386493ms waiting for pod "kube-scheduler-addons-20210813200824-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:09:50.669865  393795 pod_ready.go:38] duration metric: took 10.644047612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:09:50.669885  393795 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:09:50.669931  393795 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
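
[Note: the apiserver wait above shells in and greps the process table. Per pgrep(1), -f matches the extended-regex pattern against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match:

    # prints the PID of the newest process whose full command line matches
    # the pattern; exits nonzero if the apiserver is not running yet
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
]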
	I0813 20:09:51.053820  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:51.119257  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:51.125096  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.361875  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.494086135s)
	I0813 20:09:51.361942  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:51.361956  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:51.362251  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:51.362304  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:51.362318  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:51.362374  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:51.362390  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:51.362606  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:51.362656  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:51.362686  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:51.549635  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:51.627038  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:51.627214  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.046032  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:52.140128  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.140440  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:52.616678  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:52.687974  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:52.705638  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:52.955125  393795 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.018448349s)
	I0813 20:09:52.955165  393795 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.285214304s)
	I0813 20:09:52.955186  393795 api_server.go:70] duration metric: took 14.371365473s to wait for apiserver process to appear ...
	I0813 20:09:52.955194  393795 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:09:52.955192  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:52.955205  393795 api_server.go:239] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0813 20:09:52.955225  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:52.955612  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:52.955637  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:52.955649  393795 main.go:130] libmachine: Making call to close driver server
	I0813 20:09:52.955665  393795 main.go:130] libmachine: (addons-20210813200824-393438) Calling .Close
	I0813 20:09:52.955693  393795 main.go:130] libmachine: (addons-20210813200824-393438) DBG | Closing plugin on server side
	I0813 20:09:52.955913  393795 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:09:52.955930  393795 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:09:52.964908  393795 api_server.go:265] https://192.168.39.71:8443/healthz returned 200:
	ok
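
[Note: a hand-run equivalent of this healthz probe, with the endpoint and expected body taken from the two log lines above; -k skips TLS verification since the apiserver presents minikube's own CA:

    curl -k https://192.168.39.71:8443/healthz
    # expected body: ok
]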
	I0813 20:09:52.965928  393795 api_server.go:139] control plane version: v1.21.3
	I0813 20:09:52.965944  393795 api_server.go:129] duration metric: took 10.744185ms to wait for apiserver health ...
	I0813 20:09:52.965952  393795 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:09:52.987760  393795 system_pods.go:59] 18 kube-system pods found
	I0813 20:09:52.987805  393795 system_pods.go:61] "coredns-558bd4d5db-2tfm4" [8c83bfa0-cf21-4b94-ae75-e9ec1ec3bea9] Running
	I0813 20:09:52.987813  393795 system_pods.go:61] "csi-hostpath-attacher-0" [693a4034-9d0d-48fa-94af-c786d7758597] Pending
	I0813 20:09:52.987819  393795 system_pods.go:61] "csi-hostpath-provisioner-0" [afce731b-2eb1-412b-9091-2dc0a1a021a9] Pending
	I0813 20:09:52.987829  393795 system_pods.go:61] "csi-hostpath-resizer-0" [d54d1c38-b5ed-4549-be19-af03c3f20316] Pending
	I0813 20:09:52.987841  393795 system_pods.go:61] "csi-hostpath-snapshotter-0" [c90067b7-84ec-447b-b688-d02a0e676b7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0813 20:09:52.987855  393795 system_pods.go:61] "csi-hostpathplugin-0" [a311fddd-ed33-4f65-a513-2dfef644826e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:09:52.987867  393795 system_pods.go:61] "etcd-addons-20210813200824-393438" [a43d3749-089e-4b6d-ac9f-20d2efd0a1fa] Running
	I0813 20:09:52.987877  393795 system_pods.go:61] "kube-apiserver-addons-20210813200824-393438" [d28ca15b-5eae-4c4e-b9c3-0c0622ed4585] Running
	I0813 20:09:52.987886  393795 system_pods.go:61] "kube-controller-manager-addons-20210813200824-393438" [1f7b8d18-b800-46c4-9075-2dc0f88979d5] Running
	I0813 20:09:52.987892  393795 system_pods.go:61] "kube-proxy-tz56r" [fcbba51d-011a-43d5-b34b-880cf73c8792] Running
	I0813 20:09:52.987902  393795 system_pods.go:61] "kube-scheduler-addons-20210813200824-393438" [606cb4e1-11f1-42ee-8986-851f91051a62] Running
	I0813 20:09:52.987910  393795 system_pods.go:61] "metrics-server-77c99ccb96-9567z" [f3e2b887-96a1-4798-862b-2cce165f6e71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:09:52.987922  393795 system_pods.go:61] "registry-5svq5" [e46e7721-b1fa-4ead-b2e2-c7e63d60991a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:09:52.987936  393795 system_pods.go:61] "registry-proxy-tqczv" [753222c4-f8c8-4036-aaaf-76542b374291] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:09:52.987950  393795 system_pods.go:61] "snapshot-controller-989f9ddc8-n5hps" [a734c119-4bbf-41d1-878e-220f971d4eb5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:52.987964  393795 system_pods.go:61] "snapshot-controller-989f9ddc8-nbw4s" [0a5681bf-2ab3-4d43-89f4-979100af9d85] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:52.987977  393795 system_pods.go:61] "storage-provisioner" [18c67738-f79c-43e7-8251-6887d8b8a3c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:09:52.987991  393795 system_pods.go:61] "tiller-deploy-768d69497-dhd7f" [b66f31a9-eec4-4952-a7d1-5e6338a26140] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 20:09:52.988001  393795 system_pods.go:74] duration metric: took 22.044156ms to wait for pod list to return data ...
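
[Note: the 18-pod inventory above can be reproduced with a plain pod listing; a hedged one-liner using the same context name, with custom columns mirroring the name/phase pairs the log prints:

    kubectl --context addons-20210813200824-393438 -n kube-system get pods \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
]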
	I0813 20:09:52.988011  393795 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:09:53.022489  393795 default_sa.go:45] found service account: "default"
	I0813 20:09:53.022516  393795 default_sa.go:55] duration metric: took 34.494493ms for default service account to be created ...
	I0813 20:09:53.022528  393795 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:09:53.037424  393795 system_pods.go:86] 18 kube-system pods found
	I0813 20:09:53.037449  393795 system_pods.go:89] "coredns-558bd4d5db-2tfm4" [8c83bfa0-cf21-4b94-ae75-e9ec1ec3bea9] Running
	I0813 20:09:53.037455  393795 system_pods.go:89] "csi-hostpath-attacher-0" [693a4034-9d0d-48fa-94af-c786d7758597] Pending
	I0813 20:09:53.037460  393795 system_pods.go:89] "csi-hostpath-provisioner-0" [afce731b-2eb1-412b-9091-2dc0a1a021a9] Pending
	I0813 20:09:53.037463  393795 system_pods.go:89] "csi-hostpath-resizer-0" [d54d1c38-b5ed-4549-be19-af03c3f20316] Pending
	I0813 20:09:53.037474  393795 system_pods.go:89] "csi-hostpath-snapshotter-0" [c90067b7-84ec-447b-b688-d02a0e676b7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0813 20:09:53.037484  393795 system_pods.go:89] "csi-hostpathplugin-0" [a311fddd-ed33-4f65-a513-2dfef644826e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0813 20:09:53.037493  393795 system_pods.go:89] "etcd-addons-20210813200824-393438" [a43d3749-089e-4b6d-ac9f-20d2efd0a1fa] Running
	I0813 20:09:53.037501  393795 system_pods.go:89] "kube-apiserver-addons-20210813200824-393438" [d28ca15b-5eae-4c4e-b9c3-0c0622ed4585] Running
	I0813 20:09:53.037508  393795 system_pods.go:89] "kube-controller-manager-addons-20210813200824-393438" [1f7b8d18-b800-46c4-9075-2dc0f88979d5] Running
	I0813 20:09:53.037514  393795 system_pods.go:89] "kube-proxy-tz56r" [fcbba51d-011a-43d5-b34b-880cf73c8792] Running
	I0813 20:09:53.037521  393795 system_pods.go:89] "kube-scheduler-addons-20210813200824-393438" [606cb4e1-11f1-42ee-8986-851f91051a62] Running
	I0813 20:09:53.037528  393795 system_pods.go:89] "metrics-server-77c99ccb96-9567z" [f3e2b887-96a1-4798-862b-2cce165f6e71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 20:09:53.037533  393795 system_pods.go:89] "registry-5svq5" [e46e7721-b1fa-4ead-b2e2-c7e63d60991a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 20:09:53.037542  393795 system_pods.go:89] "registry-proxy-tqczv" [753222c4-f8c8-4036-aaaf-76542b374291] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 20:09:53.037549  393795 system_pods.go:89] "snapshot-controller-989f9ddc8-n5hps" [a734c119-4bbf-41d1-878e-220f971d4eb5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:53.037562  393795 system_pods.go:89] "snapshot-controller-989f9ddc8-nbw4s" [0a5681bf-2ab3-4d43-89f4-979100af9d85] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 20:09:53.037572  393795 system_pods.go:89] "storage-provisioner" [18c67738-f79c-43e7-8251-6887d8b8a3c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 20:09:53.037581  393795 system_pods.go:89] "tiller-deploy-768d69497-dhd7f" [b66f31a9-eec4-4952-a7d1-5e6338a26140] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 20:09:53.037592  393795 system_pods.go:126] duration metric: took 15.057891ms to wait for k8s-apps to be running ...
	I0813 20:09:53.037601  393795 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:09:53.037650  393795 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:09:53.056761  393795 system_svc.go:56] duration metric: took 19.155194ms WaitForService to wait for kubelet.
	I0813 20:09:53.056786  393795 kubeadm.go:547] duration metric: took 14.47296686s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:09:53.056805  393795 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:09:53.071698  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:53.082121  393795 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:09:53.082146  393795 node_conditions.go:123] node cpu capacity is 2
	I0813 20:09:53.082161  393795 node_conditions.go:105] duration metric: took 25.351763ms to run NodePressure ...
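
[Note: the NodePressure check reads node capacity from the API; a sketch that prints the same two figures the log reports (ephemeral storage 17784752Ki, 2 CPUs), assuming the same context:

    kubectl --context addons-20210813200824-393438 get nodes -o \
      jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
]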
	I0813 20:09:53.082170  393795 start.go:231] waiting for startup goroutines ...
	I0813 20:09:53.135484  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:53.135831  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:53.546753  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:53.617725  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:53.624343  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.107444  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:54.126823  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.134995  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:54.551048  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:54.620421  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:54.624469  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.047334  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:55.119390  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.124655  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:55.559762  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:55.618528  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:55.628164  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.052643  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.121580  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:56.123711  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:56.549004  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:56.618048  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:56.623479  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.055925  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.120034  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.123256  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:57.548168  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:57.617559  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:57.620927  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.048353  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:58.117159  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.120471  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:58.549091  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:58.616699  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:58.620868  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:59.056972  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:59.116982  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:59.124654  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:09:59.547139  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:09:59.624786  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:09:59.628304  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.047495  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.119540  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.122983  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:00.547976  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:00.617461  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:00.621086  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.065738  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.118920  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.122088  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:01.573288  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:01.623421  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:01.625722  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.046315  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.117886  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.121486  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:02.551665  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:02.624306  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:02.625820  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.049929  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.117277  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:03.121064  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.547627  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:03.624732  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:03.627066  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.049679  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.119291  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:04.121497  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.552850  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:04.638494  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:04.639397  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.059369  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.138758  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:05.150516  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.570407  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:05.628272  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:05.629076  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.110391  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.118172  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.141644  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:06.554002  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:06.628088  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:06.637241  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.050356  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.141412  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.141566  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:07.546967  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:07.616357  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:07.627367  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.102318  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.291583  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:08.291600  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.550319  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:08.620560  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:08.630347  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.078463  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.118587  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:09.121381  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.552227  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:09.623455  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:09.624643  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.048268  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.123065  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:10.124537  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.547376  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:10.617884  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:10.621395  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.047193  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.117134  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.120316  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:11.584369  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:11.621816  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:11.625581  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.047882  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:12.122235  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.133256  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:12.549914  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:12.617136  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:12.620243  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.047543  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.117218  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.120838  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:13.545911  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:13.616978  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:13.620547  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.053543  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.122347  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.131774  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:14.547187  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:14.617079  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:14.620325  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.047240  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.121756  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.123510  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:15.548100  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:15.624285  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:15.627983  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.054580  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.122976  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.126345  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:16.567087  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:16.618465  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:16.621630  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.047902  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.117168  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.120538  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:17.552944  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:17.617890  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:17.620642  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.048206  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.123556  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:18.127644  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.545506  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:18.617363  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:18.620329  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.047815  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.122114  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:19.124139  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.552620  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:19.629874  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:19.636268  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.047059  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.117974  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.121371  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:20.546295  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:20.616662  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:20.623186  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.046137  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:21.117107  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.159274  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:21.546465  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:21.617931  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:21.621446  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.072576  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.119216  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.122046  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:22.547531  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:22.618405  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:22.622111  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.045636  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.117099  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.120698  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:23.558909  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:23.621855  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:23.625197  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.047281  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.117743  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.120357  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:24.550480  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:24.618186  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:24.621735  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.049506  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.117356  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.121864  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:25.546533  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:25.620243  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 20:10:25.622547  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.049763  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.120480  393795 kapi.go:108] duration metric: took 38.538947676s to wait for kubernetes.io/minikube-addons=registry ...
	I0813 20:10:26.123696  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:26.546776  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:26.622303  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.048722  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.123060  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:27.546444  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:27.629315  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.049854  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.122398  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:28.554848  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:28.623609  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.054207  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.120551  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:29.547042  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:29.622329  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.053433  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.122243  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:30.546149  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:30.620691  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.053904  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.124986  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:31.560035  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:31.622639  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.048338  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.125154  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:32.546769  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:32.621242  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.047992  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.124312  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:33.548871  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:33.622046  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.046877  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.124528  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:34.547045  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:34.621988  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.072168  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.363607  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:35.580435  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:35.623141  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.051282  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.124046  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:36.546063  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:36.622476  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.046056  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.140020  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:37.547668  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:37.624925  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.061397  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.121466  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:38.546944  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:38.629856  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.063922  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.131145  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:39.546633  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:39.622596  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.059675  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.121599  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:40.549082  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:40.621140  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.053747  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.122011  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:41.546963  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:41.623777  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.047529  393795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 20:10:42.161825  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:42.549339  393795 kapi.go:108] duration metric: took 53.019711233s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 20:10:42.622045  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.123437  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:43.630980  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.121339  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:44.624213  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.135220  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:45.621437  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.124022  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:46.624309  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.124493  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:47.623752  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.127263  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:48.622079  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.121013  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:49.623664  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.121667  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:50.624587  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.124748  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:51.625021  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.122980  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:52.629321  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.149334  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:53.622932  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.128086  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:54.625633  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.121994  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:55.621928  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.122633  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:56.624752  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.124200  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:57.623946  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.121942  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:58.623073  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.125544  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:10:59.627375  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.122768  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:00.623232  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.491364  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:01.623739  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.127761  393795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 20:11:02.625805  393795 kapi.go:108] duration metric: took 1m16.12459342s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 20:11:02.628375  393795 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, helm-tiller, metrics-server, volumesnapshots, olm, registry, csi-hostpath-driver, ingress
	I0813 20:11:02.628403  393795 addons.go:344] enableAddons completed in 1m24.044553926s
	I0813 20:11:02.674198  393795 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:11:02.676223  393795 out.go:177] * Done! kubectl is now configured to use "addons-20210813200824-393438" cluster and "default" namespace by default
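	The kapi.go:96 / kapi.go:108 lines above are minikube polling each addon's pods by label selector until they leave Pending, then reporting the elapsed time as a duration metric. The following is a minimal client-go sketch of that pattern, not minikube's actual kapi.go code; the helper name waitForPodsRunning and the 500ms interval are illustrative assumptions, while the selectors and log wording are taken from the lines above.

	package kapisketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector until every one is
	// Running, mirroring the "waiting for pod ... current state: Pending"
	// loop and the "duration metric: took ..." line in the log above.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists just retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
		}
		return err
	}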
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID
	3e737acc6b66b       4ba8776e69275       5 minutes ago       Running             etcd-restore-operator                    0                   32bebd0dae3c9
	15845df69373b       4ba8776e69275       5 minutes ago       Running             etcd-backup-operator                     0                   32bebd0dae3c9
	3736ef34faf53       4ba8776e69275       5 minutes ago       Running             etcd-operator                            0                   32bebd0dae3c9
	dd0bfd00f90e6       a90209bb39e3d       6 minutes ago       Running             private-image-eu                         0                   3aa03826536d0
	453423188a401       a90209bb39e3d       6 minutes ago       Running             private-image                            0                   8b6a284daf6c1
	194c7dd592dbb       7ce0143dee376       6 minutes ago       Running             nginx                                    0                   986d409374264
	ce3ff0151318a       56cc512116c8f       6 minutes ago       Running             busybox                                  0                   d532013303de2
	c10c8f99522b9       77a8908a12e35       7 minutes ago       Running             liveness-probe                           0                   377e7a5422ca0
	0113aee835ee4       b4d03a87a2f45       7 minutes ago       Running             hostpath                                 0                   377e7a5422ca0
	202d780fb6302       84b0f3f7f6f04       7 minutes ago       Running             node-driver-registrar                    0                   377e7a5422ca0
	9467f62999b15       d544402579747       7 minutes ago       Running             packageserver                            0                   0f9a9e7dd72e2
	672d48d6816c2       d544402579747       7 minutes ago       Running             packageserver                            0                   e9159d08adc3b
	fbf6b328ef674       fa6785e2e7324       7 minutes ago       Running             csi-external-health-monitor-controller   0                   377e7a5422ca0
	e3a767f09210e       656bd6cba647c       7 minutes ago       Running             registry-server                          0                   300bd7c31f5ea
	8312318d18ccc       f1d8a00ae690f       7 minutes ago       Running             volume-snapshot-controller               0                   ed00c7878a261
	ff54a4aa9e9fb       f1d8a00ae690f       7 minutes ago       Running             volume-snapshot-controller               0                   822fadf30e2c1
	bdd13ea6b4cad       a8fe79377034e       7 minutes ago       Running             csi-resizer                              0                   20d847f24454b
	f0668dddd469e       e0d187f105d60       7 minutes ago       Running             csi-provisioner                          0                   9276522329d96
	c6de6b93a33de       da32a49a903a6       7 minutes ago       Running             csi-snapshotter                          0                   6316376c5f8d6
	31b56f038e18f       223c6dea7afe5       8 minutes ago       Running             csi-external-health-monitor-agent        0                   377e7a5422ca0
	0d166fad4b514       03ce9595bf925       8 minutes ago       Running             csi-attacher                             0                   0ddab321ab35c
	36d8294f0e023       d544402579747       8 minutes ago       Running             olm-operator                             0                   e37cdd358d7ce
	c26856ce09abb       d544402579747       8 minutes ago       Running             catalog-operator                         0                   13b7ba0028f38
	cbfa35fa4df5f       6e38f40d628db       8 minutes ago       Running             storage-provisioner                      0                   9ee731368e492
	85e9767d83599       296a6d5035e2d       8 minutes ago       Running             coredns                                  0                   5a6286f1850ad
	87c6001e81327       adb2816ea823a       8 minutes ago       Running             kube-proxy                               0                   415e0f470cff0
	b6934d633e207       6be0dc1302e30       8 minutes ago       Running             kube-scheduler                           0                   e6907de5567d5
	ca4b86da42333       0369cf4303ffd       8 minutes ago       Running             etcd                                     0                   7e401ad660ecb
	0836c7b4d30b4       bc2bb319a7038       8 minutes ago       Running             kube-controller-manager                  0                   0ab19d123dec8
	fc458818370dc       3d174f00aa39e       8 minutes ago       Running             kube-apiserver                           0                   ad8c9d41b134f
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:08:36 UTC, end at Fri 2021-08-13 20:18:05 UTC. --
	Aug 13 20:17:41 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:41.967462852Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
	Aug 13 20:17:41 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:41.975524545Z" level=info msg="Finish piping \"stderr\" of container exec \"adeafac2f01cc4ebd8040e7926aa53d0a28b3e969c8547781944cbbc91430541\""
	Aug 13 20:17:41 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:41.975546206Z" level=info msg="Finish piping \"stdout\" of container exec \"adeafac2f01cc4ebd8040e7926aa53d0a28b3e969c8547781944cbbc91430541\""
	Aug 13 20:17:41 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:41.975575357Z" level=info msg="Exec process \"adeafac2f01cc4ebd8040e7926aa53d0a28b3e969c8547781944cbbc91430541\" exits with exit code 0 and error <nil>"
	Aug 13 20:17:41 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:41.982199275Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.800898087Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" with command [grpc_health_probe -addr=:50051] and timeout 1 (s)"
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.810911738Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.906850136Z" level=info msg="Finish piping \"stdout\" of container exec \"2b0c855874ff8a01c4d774006d8651d5c4af8aa2eb73b88f1ba2c42a6e6221b1\""
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.906865314Z" level=info msg="Finish piping \"stderr\" of container exec \"2b0c855874ff8a01c4d774006d8651d5c4af8aa2eb73b88f1ba2c42a6e6221b1\""
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.907525697Z" level=info msg="Exec process \"2b0c855874ff8a01c4d774006d8651d5c4af8aa2eb73b88f1ba2c42a6e6221b1\" exits with exit code 0 and error <nil>"
	Aug 13 20:17:51 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:51.990766356Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
	Aug 13 20:17:52 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:52.005098099Z" level=info msg="Finish piping \"stderr\" of container exec \"2da892567f15878aab76b57a7b0ecee88a7322522ff74206252dc114a92c6b5d\""
	Aug 13 20:17:52 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:52.005246381Z" level=info msg="Exec process \"2da892567f15878aab76b57a7b0ecee88a7322522ff74206252dc114a92c6b5d\" exits with exit code 0 and error <nil>"
	Aug 13 20:17:52 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:52.005687037Z" level=info msg="Finish piping \"stdout\" of container exec \"2da892567f15878aab76b57a7b0ecee88a7322522ff74206252dc114a92c6b5d\""
	Aug 13 20:17:52 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:17:52.010765326Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.800926955Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" with command [grpc_health_probe -addr=:50051] and timeout 1 (s)"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.807123125Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.887766171Z" level=info msg="Finish piping \"stderr\" of container exec \"1eb444e132f294cb20fcaf6697fc6abad20c166f7813497ba969e7b3691e4416\""
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.887940721Z" level=info msg="Exec process \"1eb444e132f294cb20fcaf6697fc6abad20c166f7813497ba969e7b3691e4416\" exits with exit code 0 and error <nil>"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.888108961Z" level=info msg="Finish piping \"stdout\" of container exec \"1eb444e132f294cb20fcaf6697fc6abad20c166f7813497ba969e7b3691e4416\""
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.976263944Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.985540311Z" level=info msg="Finish piping \"stderr\" of container exec \"2c8b05367a047eace9a95197d735c421fcfc76bc38afaf62e3b653584d606f90\""
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.985753315Z" level=info msg="Finish piping \"stdout\" of container exec \"2c8b05367a047eace9a95197d735c421fcfc76bc38afaf62e3b653584d606f90\""
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.986778466Z" level=info msg="Exec process \"2c8b05367a047eace9a95197d735c421fcfc76bc38afaf62e3b653584d606f90\" exits with exit code 0 and error <nil>"
	Aug 13 20:18:01 addons-20210813200824-393438 containerd[2187]: time="2021-08-13T20:18:01.990445766Z" level=info msg="ExecSync for \"e3a767f09210ec0d1307664f83281d48d174b1bcd6c94a81b5e3e59c9b8eac39\" returns with exit code 0"
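	The periodic ExecSync lines above are containerd executing health probes inside the registry-server container: grpc_health_probe against :50051 with a 1s timeout and a 5s timeout. A hedged sketch of the kind of pod-spec probes that would produce exactly these lines, using k8s.io/api types; the field values are read off the log, the surrounding declarations are illustrative, and which timeout belongs to liveness versus readiness is an assumption.

	package probes

	import corev1 "k8s.io/api/core/v1"

	// Exec probes matching the containerd log: the kubelet asks the runtime
	// to run grpc_health_probe in the container on each probe period.
	// Assumption: the 1s timeout is the liveness probe, 5s the readiness probe.
	var (
		liveness = corev1.Probe{
			Handler: corev1.Handler{ // renamed ProbeHandler in k8s.io/api >= v0.23
				Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
			},
			TimeoutSeconds: 1,
		}
		readiness = corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
			},
			TimeoutSeconds: 5,
		}
	)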
	
	* 
	* ==> coredns [85e9767d83599aa14549b85e80e590141318fd11e219821eb5eacd7a1c6cd258] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813200824-393438
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210813200824-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=addons-20210813200824-393438
	                    minikube.k8s.io/updated_at=2021_08_13T20_09_25_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813200824-393438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210813200824-393438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:09:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813200824-393438
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:18:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:17:32 +0000   Fri, 13 Aug 2021 20:09:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:17:32 +0000   Fri, 13 Aug 2021 20:09:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:17:32 +0000   Fri, 13 Aug 2021 20:09:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:17:32 +0000   Fri, 13 Aug 2021 20:09:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    addons-20210813200824-393438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c110193e39f44e5a537f8de0ea01daf
	  System UUID:                7c110193-e39f-44e5-a537-f8de0ea01daf
	  Boot ID:                    e2e7996e-c1ef-48df-834d-4c91591fce75
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  default                     private-image-7ff9c8c74f-qcb6d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  default                     private-image-eu-5956d58f9f-85ktb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  default                     task-pv-pod                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-558bd4d5db-2tfm4                                100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m28s
	  kube-system                 csi-hostpath-attacher-0                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 csi-hostpath-provisioner-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 csi-hostpath-resizer-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 csi-hostpath-snapshotter-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 csi-hostpathplugin-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 etcd-addons-20210813200824-393438                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m35s
	  kube-system                 kube-apiserver-addons-20210813200824-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-controller-manager-addons-20210813200824-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-proxy-tz56r                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-addons-20210813200824-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 snapshot-controller-989f9ddc8-n5hps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 snapshot-controller-989f9ddc8-nbw4s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  my-etcd                     etcd-operator-85cd4f54cd-z9qkn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  olm                         catalog-operator-75d496484d-x887b                       10m (0%)      0 (0%)      80Mi (2%)        0 (0%)         8m19s
	  olm                         olm-operator-859c88c96-7j2mx                            10m (0%)      0 (0%)      160Mi (4%)       0 (0%)         8m19s
	  olm                         operatorhubio-catalog-n7288                             10m (0%)      0 (0%)      50Mi (1%)        0 (0%)         8m5s
	  olm                         packageserver-6f5dccfdd7-4g764                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  olm                         packageserver-6f5dccfdd7-tl766                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                780m (39%)   0 (0%)
	  memory             460Mi (11%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m51s (x6 over 8m51s)  kubelet     Node addons-20210813200824-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x5 over 8m51s)  kubelet     Node addons-20210813200824-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x5 over 8m51s)  kubelet     Node addons-20210813200824-393438 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet     Node addons-20210813200824-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet     Node addons-20210813200824-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet     Node addons-20210813200824-393438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m29s                  kubelet     Node addons-20210813200824-393438 status is now: NodeReady
	  Normal  Starting                 8m27s                  kube-proxy  Starting kube-proxy.
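	The Capacity/Allocatable block in this section is exposed by the API as resource.Quantity values on the Node object. A minimal client-go sketch of reading the same figures back; the node name is taken from this report, client construction is omitted, and the helper name printAllocatable is illustrative.

	package nodeinfo

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printAllocatable prints the cpu/memory/pods figures shown in the
	// "describe nodes" section above (2 / 3935016Ki / 110).
	func printAllocatable(cs kubernetes.Interface, name string) error {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		alloc := node.Status.Allocatable
		pods := alloc[corev1.ResourcePods]
		fmt.Printf("cpu=%s memory=%s pods=%s\n",
			alloc.Cpu().String(), alloc.Memory().String(), pods.String())
		return nil
	}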
	
	* 
	* ==> dmesg <==
	* [ +15.840037] systemd-fstab-generator[2791]: Ignoring "noauto" for root device
	[ +14.377305] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.575875] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.088237] kauditd_printk_skb: 155 callbacks suppressed
	[  +5.850593] kauditd_printk_skb: 65 callbacks suppressed
	[Aug13 20:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.945146] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.958629] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.544389] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.814485] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.918284] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.075357] NFSD: Unable to end grace period: -110
	[  +7.914343] kauditd_printk_skb: 8 callbacks suppressed
	[Aug13 20:11] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.960229] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.262407] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.911360] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.814317] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.273238] kauditd_printk_skb: 119 callbacks suppressed
	[ +14.132068] kauditd_printk_skb: 161 callbacks suppressed
	[Aug13 20:12] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.004601] kauditd_printk_skb: 185 callbacks suppressed
	[ +12.403977] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.833337] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.353883] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [15845df69373b7ad6d4cc3cd3f5b12ddc44a5ab8399b847bdcd053969ae26098] <==
	* time="2021-08-13T20:12:25Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:12:25Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:12:25Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-13T20:12:25Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:12:25.766733       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"790bab1f-bf38-4483-83bd-150235be981e", ResourceVersion:"1956", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482345, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-z9qkn\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:12:25Z\",\"renewTime\":\"2021-08-13T20:12:25Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-z9qkn became leader'
	time="2021-08-13T20:12:25Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [3736ef34faf5386a1e54e3256b7fdc0a58ab296ebc66ed3629bd1b10ce81f0bf] <==
	* time="2021-08-13T20:12:25Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-13T20:12:25Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-13T20:12:25Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:12:25Z" level=info msg="Go OS/Arch: linux/amd64"
	E0813 20:12:25.414913       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"330771f8-27e9-43f2-8913-846d0859293a", ResourceVersion:"1947", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482345, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-z9qkn\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:12:25Z\",\"renewTime\":\"2021-08-13T20:12:25Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-z9qkn became leader'
	
	* 
	* ==> etcd [3e737acc6b66b5f48dd56d5484e762f91f4aa2386a2e689a93f86173adde6ed2] <==
	* time="2021-08-13T20:12:26Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-13T20:12:26Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-13T20:12:26Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-13T20:12:26Z" level=info msg="Git SHA: c8a1c64"
	E0813 20:12:26.113544       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"18033575-98bd-4cfa-a23b-cb8cb07d498b", ResourceVersion:"1962", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764482346, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"endpoints.kubernetes.io/last-change-trigger-time":"2021-08-13T20:12:26Z", "control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-z9qkn\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-13T20:12:26Z\",\"renewTime\":\"2021-08-13T20:12:26Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-z9qkn became leader'
	time="2021-08-13T20:12:26Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-13T20:12:26Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> etcd [ca4b86da42333083c13c8317a477a1b13911234b4a729275a6fe2c4afbf09af4] <==
	* 2021-08-13 20:14:03.876456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:13.877412 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:23.875804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:33.876103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:43.876710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:14:53.885385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:03.877069 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:13.876340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:23.876841 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:33.876886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:43.876142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:15:53.879903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:03.876324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:13.876664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:23.877179 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:33.876423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:43.875906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:16:53.877747 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:03.876488 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:13.875435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:23.875659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:33.875256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:43.876794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:17:53.883472 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:18:03.876664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
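	The repeated "/health OK (status code 200)" lines are etcd's own 10-second self-check against its HTTP health endpoint. A hedged Go sketch of an equivalent client-side check; note that minikube's etcd listens with TLS and client-cert auth on 2379, so a real check needs an http.Client configured with those certs, and this sketch assumes a plaintext listener.

	package etcdhealth

	import (
		"fmt"
		"net/http"
	)

	// checkHealth issues a GET against etcd's /health endpoint, e.g.
	// checkHealth("http://127.0.0.1:2379"). Assumption: plaintext listener.
	func checkHealth(endpoint string) error {
		resp, err := http.Get(endpoint + "/health")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("/health returned %d", resp.StatusCode)
		}
		return nil
	}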
	
	* 
	* ==> kernel <==
	*  20:18:06 up 9 min,  0 users,  load average: 1.47, 2.18, 1.51
	Linux addons-20210813200824-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [fc458818370dc36d2cccfda8758af47c043373d26b4cc16525419748168a0a1c] <==
	* I0813 20:12:31.156561       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:13:12.526436       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:13:12.526948       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:13:12.527521       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:13:52.415614       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:13:52.416465       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:13:52.416946       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:14:35.215603       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:14:35.215688       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:14:35.215698       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:15:10.329817       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:15:10.330053       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:15:10.330073       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:15:44.229123       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:15:44.229410       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:15:44.229448       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:16:18.432539       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:16:18.432662       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:16:18.432673       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:16:51.765221       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:16:51.765613       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:16:51.765647       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:17:31.110463       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:17:31.110639       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:17:31.110661       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0836c7b4d30b423ab16ed6becfc6c861bc1246dfba02c69a81579587af215677] <==
	* I0813 20:11:18.483323       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-dp22s"
	I0813 20:11:18.547063       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5954cc4898 to 1"
	I0813 20:11:18.597081       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5954cc4898" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5954cc4898-94h24"
	I0813 20:11:18.658329       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-4mczz"
	I0813 20:11:23.489879       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0813 20:11:24.523244       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0813 20:11:38.989732       1 event.go:291] "Event occurred" object="default/private-image" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-7ff9c8c74f to 1"
	I0813 20:11:39.027623       1 event.go:291] "Event occurred" object="default/private-image-7ff9c8c74f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-7ff9c8c74f-qcb6d"
	I0813 20:11:54.163355       1 event.go:291] "Event occurred" object="default/private-image-eu" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set private-image-eu-5956d58f9f to 1"
	I0813 20:11:54.523076       1 event.go:291] "Event occurred" object="default/private-image-eu-5956d58f9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: private-image-eu-5956d58f9f-85ktb"
	I0813 20:12:03.023678       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0813 20:12:04.414459       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b98a4bed-fc72-11eb-8157-de3776579278") from node "addons-20210813200824-393438" 
	I0813 20:12:04.995346       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b98a4bed-fc72-11eb-8157-de3776579278") from node "addons-20210813200824-393438" 
	I0813 20:12:04.995844       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9\" "
	I0813 20:12:17.965158       1 event.go:291] "Event occurred" object="my-etcd/etcd-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set etcd-operator-85cd4f54cd to 1"
	I0813 20:12:17.989278       1 event.go:291] "Event occurred" object="my-etcd/etcd-operator-85cd4f54cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: etcd-operator-85cd4f54cd-z9qkn"
	I0813 20:12:20.513272       1 namespace_controller.go:185] Namespace has been deleted ingress-nginx
	I0813 20:12:20.837205       1 namespace_controller.go:185] Namespace has been deleted gcp-auth
	I0813 20:12:38.156890       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdbackups.etcd.database.coreos.com
	I0813 20:12:38.157380       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdrestores.etcd.database.coreos.com
	I0813 20:12:38.157435       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for etcdclusters.etcd.database.coreos.com
	I0813 20:12:38.159057       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0813 20:12:38.260046       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:12:38.949312       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0813 20:12:38.949633       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [87c6001e81327236e3ac4335bec94bad35287656d27afbac8a4e47224446822b] <==
	* I0813 20:09:39.479546       1 node.go:172] Successfully retrieved node IP: 192.168.39.71
	I0813 20:09:39.479847       1 server_others.go:140] Detected node IP 192.168.39.71
	W0813 20:09:39.480044       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:09:39.575235       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:09:39.575273       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:09:39.575286       1 server_others.go:212] Using iptables Proxier.
	I0813 20:09:39.576045       1 server.go:643] Version: v1.21.3
	I0813 20:09:39.576716       1 config.go:315] Starting service config controller
	I0813 20:09:39.576730       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:09:39.576747       1 config.go:224] Starting endpoint slice config controller
	I0813 20:09:39.576752       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:09:39.583144       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:09:39.586714       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:09:39.677809       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:09:39.677942       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b6934d633e20754740756e6423c40bf719144c193b1691c2a927017825d7d6a5] <==
	* I0813 20:09:21.886559       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:09:21.886571       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:09:21.893411       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:09:21.894713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:09:21.894764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:21.895236       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:21.895439       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:09:21.895575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:21.895623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:09:21.895766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:09:21.895901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:09:21.895944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:09:21.896071       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:09:21.895103       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:09:21.895145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:09:21.895196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:22.715238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:22.799228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:09:22.833904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:09:22.850190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:09:22.894784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:09:22.958099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:23.227407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:09:23.305482       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 20:09:25.687261       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:08:36 UTC, end at Fri 2021-08-13 20:18:06 UTC. --
	Aug 13 20:15:10 addons-20210813200824-393438 kubelet[2802]: I0813 20:15:10.621302    2802 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:15:10 addons-20210813200824-393438 kubelet[2802]: I0813 20:15:10.621494    2802 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 13 20:15:11 addons-20210813200824-393438 kubelet[2802]: I0813 20:15:11.125805    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:15:40 addons-20210813200824-393438 kubelet[2802]: I0813 20:15:40.125462    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-qcb6d" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:15:42 addons-20210813200824-393438 kubelet[2802]: I0813 20:15:42.126461    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-85ktb" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:14 addons-20210813200824-393438 kubelet[2802]: E0813 20:16:14.456920    2802 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/host-path/ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e-gcp-creds podName:ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e nodeName:}" failed. No retries permitted until 2021-08-13 20:18:16.456186362 +0000 UTC m=+531.102339319 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e-gcp-creds\") pod \"task-pv-pod\" (UID: \"ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e\") : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file"
	Aug 13 20:16:19 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:19.125265    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:20 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:20.125614    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:24 addons-20210813200824-393438 kubelet[2802]: E0813 20:16:24.127154    2802 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[gcp-creds], unattached volumes=[task-pv-storage kube-api-access-945zl gcp-creds]: timed out waiting for the condition" pod="default/task-pv-pod"
	Aug 13 20:16:24 addons-20210813200824-393438 kubelet[2802]: E0813 20:16:24.127315    2802 pod_workers.go:190] "Error syncing pod, skipping" err="unmounted volumes=[gcp-creds], unattached volumes=[task-pv-storage kube-api-access-945zl gcp-creds]: timed out waiting for the condition" pod="default/task-pv-pod" podUID=ac97e3e5-6a9e-42fc-98f9-0b7b5e76359e
	Aug 13 20:16:26 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:26.011398    2802 clientconn.go:106] parsed scheme: ""
	Aug 13 20:16:26 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:26.011503    2802 clientconn.go:106] scheme "" not registered, fallback to default scheme
	Aug 13 20:16:26 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:26.011577    2802 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:16:26 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:26.011596    2802 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:16:26 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:26.011905    2802 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 13 20:16:41 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:41.126058    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-qcb6d" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:16:49 addons-20210813200824-393438 kubelet[2802]: I0813 20:16:49.124859    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-85ktb" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:17:21 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:21.126194    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:17:35 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:35.129673    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 20:17:38 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:38.362444    2802 clientconn.go:106] parsed scheme: ""
	Aug 13 20:17:38 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:38.362557    2802 clientconn.go:106] scheme "" not registered, fallback to default scheme
	Aug 13 20:17:38 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:38.362659    2802 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:17:38 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:38.362676    2802 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:17:38 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:38.362768    2802 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 13 20:17:58 addons-20210813200824-393438 kubelet[2802]: I0813 20:17:58.125390    2802 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-qcb6d" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [cbfa35fa4df5f4debb8503bab761817df8d0a280ccec78a5b66c7d251259222d] <==
	* I0813 20:09:57.287759       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 20:09:57.381590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 20:09:57.397355       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 20:09:57.426687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 20:09:57.427057       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813200824-393438_bfa9f873-ba0f-4b6f-a73a-cea46ed8c526!
	I0813 20:09:57.430506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49d9e163-3b40-4ba7-9ec0-9c0877ba53ab", APIVersion:"v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813200824-393438_bfa9f873-ba0f-4b6f-a73a-cea46ed8c526 became leader
	I0813 20:09:57.538473       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813200824-393438_bfa9f873-ba0f-4b6f-a73a-cea46ed8c526!
	

                                                
                                                
-- /stdout --
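Reading the kubelet and controller-manager sections above together: the CSI path the test exercises is healthy (AttachVolume.Attach succeeded for pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9), and what keeps task-pv-pod in ContainerCreating is the gcp-creds hostPath volume injected by the gcp-auth addon. A minimal Go sketch of the semantics behind the repeated "hostPath type check failed" error, assuming only what the log shows (illustrative, not kubelet's actual hostPath plugin code): HostPathType "File" requires the path to already exist on the node as a regular file.

	// Sketch of the HostPathType=File check that keeps failing above.
	// Illustrative only; the accept/reject semantics match the error
	// text, but this is not kubelet's real implementation.
	package main

	import (
		"fmt"
		"os"
	)

	func checkHostPathFile(path string) error {
		info, err := os.Stat(path)
		if err != nil {
			// a missing path also fails the type check
			return fmt.Errorf("hostPath type check failed: %v", err)
		}
		if !info.Mode().IsRegular() {
			return fmt.Errorf("hostPath type check failed: %s is not a file", path)
		}
		return nil
	}

	func main() {
		err := checkHostPathFile("/var/lib/minikube/google_application_credentials.json")
		if err != nil {
			fmt.Println(err) // the same failure mode the kubelet retries on
		}
	}

On that reading, the root cause sits with the gcp-auth addon's credential file on the node (missing, or not a regular file), not with the csi-hostpath driver this test targets.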
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210813200824-393438 -n addons-20210813200824-393438
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813200824-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: task-pv-pod
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813200824-393438 describe pod task-pv-pod
helpers_test.go:281: (dbg) kubectl --context addons-20210813200824-393438 describe pod task-pv-pod:

                                                
                                                
-- stdout --
	Name:         task-pv-pod
	Namespace:    default
	Priority:     0
	Node:         addons-20210813200824-393438/192.168.39.71
	Start Time:   Fri, 13 Aug 2021 20:12:04 +0000
	Labels:       app=task-pv-pod
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      k8s-minikube
	      GCP_PROJECT:                     k8s-minikube
	      GCLOUD_PROJECT:                  k8s-minikube
	      GOOGLE_CLOUD_PROJECT:            k8s-minikube
	      CLOUDSDK_CORE_PROJECT:           k8s-minikube
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-945zl (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-945zl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                   Age                   From                                      Message
	  ----     ------                   ----                  ----                                      -------
	  Normal   Scheduled                6m2s                  default-scheduler                         Successfully assigned default/task-pv-pod to addons-20210813200824-393438
	  Normal   SuccessfulAttachVolume   6m2s                  attachdetach-controller                   AttachVolume.Attach succeeded for volume "pvc-6bc352b4-44e1-415a-a380-b3e5e2507bd9"
	  Warning  VolumeConditionAbnormal  6m1s (x10 over 6m2s)  csi-pv-monitor-agent-hostpath.csi.k8s.io  The volume isn't mounted
	  Warning  FailedMount              3m59s                 kubelet                                   Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[kube-api-access-945zl gcp-creds task-pv-storage]: timed out waiting for the condition
	  Warning  FailedMount              112s (x10 over 6m2s)  kubelet                                   MountVolume.SetUp failed for volume "gcp-creds" : hostPath type check failed: /var/lib/minikube/google_application_credentials.json is not a file
	  Warning  FailedMount              102s                  kubelet                                   Unable to attach or mount volumes: unmounted volumes=[gcp-creds], unattached volumes=[task-pv-storage kube-api-access-945zl gcp-creds]: timed out waiting for the condition
	  Normal   VolumeConditionNormal    62s (x41 over 5m2s)   csi-pv-monitor-agent-hostpath.csi.k8s.io  The Volume returns to the healthy state

                                                
                                                
-- /stdout --
helpers_test.go:284: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (364.17s)


                                                
                                    
TestPause/serial/Pause (26.42s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5: exit status 80 (2.077906139s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813205520-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:58:28.883779  429631 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:58:28.890051  429631 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:28.890067  429631 out.go:311] Setting ErrFile to fd 2...
	I0813 20:58:28.890072  429631 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:28.890170  429631 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:58:28.890357  429631 out.go:305] Setting JSON to false
	I0813 20:58:28.890380  429631 mustload.go:65] Loading cluster: pause-20210813205520-393438
	I0813 20:58:28.890702  429631 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:58:28.891070  429631 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:28.891112  429631 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:28.929795  429631 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0813 20:58:28.930685  429631 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:28.931399  429631 main.go:130] libmachine: Using API Version  1
	I0813 20:58:28.931424  429631 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:28.931799  429631 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:28.931979  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:58:28.935640  429631 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:58:28.936128  429631 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:28.936174  429631 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:28.949846  429631 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0813 20:58:28.950261  429631 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:28.950798  429631 main.go:130] libmachine: Using API Version  1
	I0813 20:58:28.950823  429631 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:28.951190  429631 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:28.951432  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:28.952237  429631 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813205520-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:58:28.954906  429631 out.go:177] * Pausing node pause-20210813205520-393438 ... 
	I0813 20:58:28.954950  429631 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:58:28.955372  429631 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:28.955420  429631 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:28.966603  429631 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0813 20:58:28.967036  429631 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:28.967611  429631 main.go:130] libmachine: Using API Version  1
	I0813 20:58:28.967634  429631 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:28.968131  429631 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:28.968326  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:28.969078  429631 ssh_runner.go:149] Run: systemctl --version
	I0813 20:58:28.969110  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:28.975246  429631 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:28.975698  429631 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:28.975725  429631 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:28.975875  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:28.976039  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:28.976222  429631 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:28.976373  429631 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:29.074314  429631 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:58:29.087625  429631 pause.go:50] kubelet running: true
	I0813 20:58:29.087708  429631 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:58:29.405482  429631 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:58:29.405618  429631 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:58:29.582949  429631 cri.go:76] found id: "33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81"
	I0813 20:58:29.582985  429631 cri.go:76] found id: "b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d"
	I0813 20:58:29.582992  429631 cri.go:76] found id: "afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5"
	I0813 20:58:29.582998  429631 cri.go:76] found id: "57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849"
	I0813 20:58:29.583032  429631 cri.go:76] found id: "1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c"
	I0813 20:58:29.583045  429631 cri.go:76] found id: "0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5"
	I0813 20:58:29.583051  429631 cri.go:76] found id: "1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c"
	I0813 20:58:29.583058  429631 cri.go:76] found id: "1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62"
	I0813 20:58:29.583067  429631 cri.go:76] found id: "63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0"
	I0813 20:58:29.583080  429631 cri.go:76] found id: "698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889"
	I0813 20:58:29.583088  429631 cri.go:76] found id: "df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418"
	I0813 20:58:29.583115  429631 cri.go:76] found id: "68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8"
	I0813 20:58:29.583134  429631 cri.go:76] found id: "11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016"
	I0813 20:58:29.583145  429631 cri.go:76] found id: ""
	I0813 20:58:29.583195  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:58:29.629154  429631 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","pid":4658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5/rootfs","created":"2021-08-13T20:58:12.412888441Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","pid":4590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c/rootfs","created":"2021-08-13T20:58:11.057580039Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","pid":4542,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c/rootfs","created":"2021-08-13T20:58:10.472076836Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872"},"owner":"root"
},{"ociVersion":"1.0.2-dev","id":"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81","pid":4945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81/rootfs","created":"2021-08-13T20:58:26.807760136Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","pid":4374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb63287541
9302e95acc029a319006f332461cf7cf/rootfs","created":"2021-08-13T20:58:09.064644692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813205520-393438_86a000e5c08d32d80b2fd4e89cd34dd1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","pid":4269,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94/rootfs","created":"2021-08-13T20:58:08.848304205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","io.kubernetes.cri.sandbox-log-dire
ctory":"/var/log/pods/kube-system_kube-proxy-mlf5c_c0812228-e936-4bfa-9fbb-a4d0707f2a63"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","pid":4244,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872/rootfs","created":"2021-08-13T20:58:08.637074413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813205520-393438_469cea0375ae276925a50e4dde7e4ace"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","pid":4624,"status":"running","bundle":"/run/containerd/io.cont
ainerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849/rootfs","created":"2021-08-13T20:58:11.449040242Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","pid":4909,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a/rootfs","created":"2021-08-13T20:58:26.026296621Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.
cri.sandbox-id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_99920d7c-bb8d-4c65-bf44-b56f23a40e53"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","pid":4366,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22/rootfs","created":"2021-08-13T20:58:09.044079666Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813205520-393438_36ca0d21ef43020c8f018e62049ff15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"afabb5f1
3041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","pid":4682,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5/rootfs","created":"2021-08-13T20:58:11.953431943Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","pid":4701,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d/rootfs","created":"2021-
08-13T20:58:12.144003819Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","pid":4318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1/rootfs","created":"2021-08-13T20:58:08.900909328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813205520-393438_81d9f8c777d9fb26ff8b7d9c93d26d5e"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","pid":4486,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682/rootfs","created":"2021-08-13T20:58:09.821697744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jzmnb_ea00ae4c-f4d9-414c-8762-6314a96c8a06"},"owner":"root"}]
	I0813 20:58:29.629365  429631 cri.go:113] list returned 14 containers
	I0813 20:58:29.629382  429631 cri.go:116] container: {ID:0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 Status:running}
	I0813 20:58:29.629393  429631 cri.go:116] container: {ID:1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c Status:running}
	I0813 20:58:29.629398  429631 cri.go:116] container: {ID:1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c Status:running}
	I0813 20:58:29.629407  429631 cri.go:116] container: {ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81 Status:running}
	I0813 20:58:29.629417  429631 cri.go:116] container: {ID:3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf Status:running}
	I0813 20:58:29.629425  429631 cri.go:118] skipping 3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf - not in ps
	I0813 20:58:29.629435  429631 cri.go:116] container: {ID:47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 Status:running}
	I0813 20:58:29.629447  429631 cri.go:118] skipping 47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 - not in ps
	I0813 20:58:29.629456  429631 cri.go:116] container: {ID:53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 Status:running}
	I0813 20:58:29.629463  429631 cri.go:118] skipping 53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 - not in ps
	I0813 20:58:29.629471  429631 cri.go:116] container: {ID:57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849 Status:running}
	I0813 20:58:29.629476  429631 cri.go:116] container: {ID:76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a Status:running}
	I0813 20:58:29.629484  429631 cri.go:118] skipping 76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a - not in ps
	I0813 20:58:29.629489  429631 cri.go:116] container: {ID:a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 Status:running}
	I0813 20:58:29.629499  429631 cri.go:118] skipping a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 - not in ps
	I0813 20:58:29.629505  429631 cri.go:116] container: {ID:afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5 Status:running}
	I0813 20:58:29.629514  429631 cri.go:116] container: {ID:b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d Status:running}
	I0813 20:58:29.629521  429631 cri.go:116] container: {ID:ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 Status:running}
	I0813 20:58:29.629531  429631 cri.go:118] skipping ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 - not in ps
	I0813 20:58:29.629537  429631 cri.go:116] container: {ID:cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 Status:running}
	I0813 20:58:29.629547  429631 cri.go:118] skipping cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 - not in ps
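	The cri.go lines above record a two-source filter before anything is paused: crictl ps supplies the container IDs that belong to the target namespaces, runc list supplies every task under the k8s.io root (including sandboxes, which crictl ps never reports, hence the "not in ps" skips), and only entries present in both and still in state "running" survive. A minimal sketch of that intersection, with hypothetical type and function names (IDs truncated from the log):

	// Sketch of the listing filter logged by cri.go: keep a runc-listed
	// container only if crictl reported it and it is still running.
	// Names here are illustrative, not minikube's actual ones.
	package main

	import "fmt"

	type container struct {
		ID     string
		Status string // "running", "paused", ...
	}

	func pausable(listed []container, inPS map[string]bool) []string {
		var ids []string
		for _, c := range listed {
			if !inPS[c.ID] {
				continue // sandbox or out-of-scope container: "not in ps"
			}
			if c.Status != "running" {
				continue // e.g. already paused on a retry pass
			}
			ids = append(ids, c.ID)
		}
		return ids
	}

	func main() {
		listed := []container{
			{ID: "0d1a942c", Status: "running"}, // kube-proxy container
			{ID: "3f41ec72", Status: "running"}, // etcd sandbox: never in crictl ps
		}
		inPS := map[string]bool{"0d1a942c": true}
		fmt.Println(pausable(listed, inPS)) // [0d1a942c]
	}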
	I0813 20:58:29.629597  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5
	I0813 20:58:29.658850  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c
	I0813 20:58:29.691721  429631 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:58:29Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 20:58:29.968106  429631 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:58:29.983502  429631 pause.go:50] kubelet running: false
	I0813 20:58:29.983574  429631 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:58:30.271170  429631 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:58:30.271285  429631 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:58:30.447672  429631 cri.go:76] found id: "33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81"
	I0813 20:58:30.447704  429631 cri.go:76] found id: "b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d"
	I0813 20:58:30.447711  429631 cri.go:76] found id: "afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5"
	I0813 20:58:30.447717  429631 cri.go:76] found id: "57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849"
	I0813 20:58:30.447733  429631 cri.go:76] found id: "1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c"
	I0813 20:58:30.447742  429631 cri.go:76] found id: "0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5"
	I0813 20:58:30.447747  429631 cri.go:76] found id: "1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c"
	I0813 20:58:30.447752  429631 cri.go:76] found id: "1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62"
	I0813 20:58:30.447757  429631 cri.go:76] found id: "63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0"
	I0813 20:58:30.447768  429631 cri.go:76] found id: "698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889"
	I0813 20:58:30.447777  429631 cri.go:76] found id: "df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418"
	I0813 20:58:30.447782  429631 cri.go:76] found id: "68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8"
	I0813 20:58:30.447788  429631 cri.go:76] found id: "11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016"
	I0813 20:58:30.447793  429631 cri.go:76] found id: ""
	I0813 20:58:30.447842  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:58:30.511580  429631 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","pid":4658,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5/rootfs","created":"2021-08-13T20:58:12.412888441Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","pid":4590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","rootfs":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c/rootfs","created":"2021-08-13T20:58:11.057580039Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","pid":4542,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c/rootfs","created":"2021-08-13T20:58:10.472076836Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872"},"owner":"root"}
,{"ociVersion":"1.0.2-dev","id":"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81","pid":4945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81/rootfs","created":"2021-08-13T20:58:26.807760136Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","pid":4374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419
302e95acc029a319006f332461cf7cf/rootfs","created":"2021-08-13T20:58:09.064644692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813205520-393438_86a000e5c08d32d80b2fd4e89cd34dd1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","pid":4269,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94/rootfs","created":"2021-08-13T20:58:08.848304205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mlf5c_c0812228-e936-4bfa-9fbb-a4d0707f2a63"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","pid":4244,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872/rootfs","created":"2021-08-13T20:58:08.637074413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813205520-393438_469cea0375ae276925a50e4dde7e4ace"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","pid":4624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849/rootfs","created":"2021-08-13T20:58:11.449040242Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","pid":4909,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a/rootfs","created":"2021-08-13T20:58:26.026296621Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_99920d7c-bb8d-4c65-bf44-b56f23a40e53"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","pid":4366,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22/rootfs","created":"2021-08-13T20:58:09.044079666Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813205520-393438_36ca0d21ef43020c8f018e62049ff15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","pid":4682,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5/rootfs","created":"2021-08-13T20:58:11.953431943Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","pid":4701,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d/rootfs","created":"2021-08-13T20:58:12.144003819Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","pid":4318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1/rootfs","created":"2021-08-13T20:58:08.900909328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813205520-393438_81d9f8c777d9fb26ff8b7d9c93d26d5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","pid":4486,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682/rootfs","created":"2021-08-13T20:58:09.821697744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jzmnb_ea00ae4c-f4d9-414c-8762-6314a96c8a06"},"owner":"root"}]
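The blob above is runc's JSON state listing, most likely produced by a `runc --root /run/containerd/runc/k8s.io list` call in JSON format: one object per container carrying its id, pid, status, bundle and rootfs paths, creation timestamp, and CRI annotations (container-type, sandbox-id, sandbox-log-directory). A minimal Go sketch for decoding entries of this shape; the struct is inferred from the keys visible above, not taken from minikube's source:

    // Sketch: decode runc's JSON container listing (shape inferred from
    // the log above; this struct is illustrative, not minikube's type).
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type runcContainer struct {
        OCIVersion  string            `json:"ociVersion"`
        ID          string            `json:"id"`
        PID         int               `json:"pid"`
        Status      string            `json:"status"` // "running", "paused", ...
        Bundle      string            `json:"bundle"`
        Rootfs      string            `json:"rootfs"`
        Created     string            `json:"created"`
        Annotations map[string]string `json:"annotations"`
        Owner       string            `json:"owner"`
    }

    func main() {
        var containers []runcContainer
        if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, c := range containers {
            fmt.Printf("%s %s (pid %d)\n", c.ID, c.Status, c.PID)
        }
    }

Fed the listing on stdin, it prints one "id status (pid N)" line per entry, matching the 14-container set the next log line reports.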
	I0813 20:58:30.511870  429631 cri.go:113] list returned 14 containers
	I0813 20:58:30.511892  429631 cri.go:116] container: {ID:0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 Status:paused}
	I0813 20:58:30.511908  429631 cri.go:122] skipping {0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 paused}: state = "paused", want "running"
	I0813 20:58:30.511934  429631 cri.go:116] container: {ID:1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c Status:running}
	I0813 20:58:30.511958  429631 cri.go:116] container: {ID:1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c Status:running}
	I0813 20:58:30.511966  429631 cri.go:116] container: {ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81 Status:running}
	I0813 20:58:30.511974  429631 cri.go:116] container: {ID:3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf Status:running}
	I0813 20:58:30.511986  429631 cri.go:118] skipping 3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf - not in ps
	I0813 20:58:30.511993  429631 cri.go:116] container: {ID:47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 Status:running}
	I0813 20:58:30.512005  429631 cri.go:118] skipping 47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 - not in ps
	I0813 20:58:30.512014  429631 cri.go:116] container: {ID:53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 Status:running}
	I0813 20:58:30.512034  429631 cri.go:118] skipping 53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 - not in ps
	I0813 20:58:30.512042  429631 cri.go:116] container: {ID:57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849 Status:running}
	I0813 20:58:30.512051  429631 cri.go:116] container: {ID:76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a Status:running}
	I0813 20:58:30.512061  429631 cri.go:118] skipping 76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a - not in ps
	I0813 20:58:30.512069  429631 cri.go:116] container: {ID:a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 Status:running}
	I0813 20:58:30.512081  429631 cri.go:118] skipping a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 - not in ps
	I0813 20:58:30.512087  429631 cri.go:116] container: {ID:afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5 Status:running}
	I0813 20:58:30.512105  429631 cri.go:116] container: {ID:b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d Status:running}
	I0813 20:58:30.512116  429631 cri.go:116] container: {ID:ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 Status:running}
	I0813 20:58:30.512124  429631 cri.go:118] skipping ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 - not in ps
	I0813 20:58:30.512130  429631 cri.go:116] container: {ID:cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 Status:running}
	I0813 20:58:30.512137  429631 cri.go:118] skipping cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 - not in ps
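The cri.go lines above apply two filters before pausing anything: entries whose runc state is not "running" are skipped (the first container is already paused), and entries whose ID is absent from the separate ps listing are skipped as "not in ps" (those are the pod sandboxes in the JSON above). A sketch of that selection rule, with hypothetical names rather than minikube's cri.go internals:

    // Sketch of the selection rule the cri.go lines above describe: keep
    // only IDs that are running and also present in the `ps` listing.
    // Hypothetical names, not minikube's actual cri.go API.
    package pause

    type containerState struct {
        ID     string
        Status string
    }

    func pausableIDs(listed []containerState, inPS map[string]bool) []string {
        var ids []string
        for _, c := range listed {
            if c.Status != "running" { // e.g. the first entry, state "paused"
                continue
            }
            if !inPS[c.ID] { // sandboxes appear in the list but not in ps
                continue
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

Applied to the 14 entries above, this leaves the six running, non-sandbox container IDs.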
	I0813 20:58:30.512210  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c
	I0813 20:58:30.571372  429631 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c 1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c
	I0813 20:58:30.774625  429631 out.go:177] 
	W0813 20:58:30.774867  429631 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c 1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:58:30Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
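The two Run lines above expose the failure mode: the first `runc pause` is issued with a single container ID, but the next invocation passes two IDs at once, and runc's pause subcommand takes exactly one argument, which is what the usage text and the stderr line report. A sketch of the one-invocation-per-ID pattern that usage implies; the root path and IDs are copied from the log, while the helper itself is illustrative, not minikube's ssh_runner:

    // Sketch: invoke `runc pause` once per container ID, since the
    // subcommand requires exactly 1 argument. Illustrative helper,
    // not minikube's ssh_runner.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func pauseAll(ids []string) error {
        for _, id := range ids {
            cmd := exec.Command("sudo", "runc",
                "--root", "/run/containerd/runc/k8s.io", "pause", id)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := pauseAll([]string{
            "1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c",
            "1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c",
        }); err != nil {
            fmt.Println(err)
        }
    }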
	
	W0813 20:58:30.774885  429631 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:58:30.896245  429631 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:58:30.897645  429631 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5" : exit status 80
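Exit status 80 is the process exit code the harness observes for the GUEST_PAUSE error above. A standalone Go sketch of reproducing that invocation and inspecting the exit code; the binary path and profile name are copied from the log, and the program is an illustration, not helpers_test.go:

    // Hypothetical re-creation of the failing step: run the pause command
    // and inspect the exit code. Illustration only, not minikube's tests.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "pause",
            "-p", "pause-20210813205520-393438", "--alsologtostderr", "-v=5")
        err := cmd.Run()
        if ee, ok := err.(*exec.ExitError); ok {
            fmt.Println("exit status:", ee.ExitCode()) // 80 in the run above
        } else if err != nil {
            fmt.Println("run error:", err)
        }
    }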
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (334.272994ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25: exit status 110 (11.921235551s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210813202658-393438 cp testdata/cp-test.txt      | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:31 UTC | Fri, 13 Aug 2021 20:30:31 UTC |
	|         | multinode-20210813202658-393438-m03:/home/docker/cp-test.txt |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438                              | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:31 UTC | Fri, 13 Aug 2021 20:30:31 UTC |
	|         | ssh -n                                                       |                                          |         |         |                               |                               |
	|         | multinode-20210813202658-393438-m03                          |                                          |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438                              | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:31 UTC | Fri, 13 Aug 2021 20:30:33 UTC |
	|         | node stop m03                                                |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438                              | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:34 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | node start m03                                               |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                          |         |         |                               |                               |
	| stop    | -p                                                           | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:45 UTC | Fri, 13 Aug 2021 20:34:51 UTC |
	|         | multinode-20210813202658-393438                              |                                          |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:34:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | multinode-20210813202658-393438                              |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                          |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438                              | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:58 UTC | Fri, 13 Aug 2021 20:40:59 UTC |
	|         | node delete m03                                              |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438                              | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:00 UTC | Fri, 13 Aug 2021 20:44:04 UTC |
	|         | stop                                                         |                                          |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:48:01 UTC |
	|         | multinode-20210813202658-393438                              |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                          |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2                              |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd                              |                                          |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:01 UTC | Fri, 13 Aug 2021 20:49:01 UTC |
	|         | multinode-20210813202658-393438-m03                          |                                          |         |         |                               |                               |
	|         | --driver=kvm2                                                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:02 UTC | Fri, 13 Aug 2021 20:49:03 UTC |
	|         | multinode-20210813202658-393438-m03                          |                                          |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:03 UTC | Fri, 13 Aug 2021 20:49:05 UTC |
	|         | multinode-20210813202658-393438                              |                                          |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:38 UTC | Fri, 13 Aug 2021 20:52:46 UTC |
	|         | test-preload-20210813205038-393438                           |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false                                  |                                          |         |         |                               |                               |
	|         | --driver=kvm2                                                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                 |                                          |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:47 UTC | Fri, 13 Aug 2021 20:52:48 UTC |
	|         | test-preload-20210813205038-393438                           |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                  |                                          |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:48 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438                           |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2                               |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd                              |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                 |                                          |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438                           |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                           | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:41 UTC |
	|         | test-preload-20210813205038-393438                           |                                          |         |         |                               |                               |
	| start   | -p                                                           | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:41 UTC | Fri, 13 Aug 2021 20:54:41 UTC |
	|         | scheduled-stop-20210813205341-393438                         |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2                                  |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	| stop    | -p                                                           | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:42 UTC | Fri, 13 Aug 2021 20:54:42 UTC |
	|         | scheduled-stop-20210813205341-393438                         |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                                           |                                          |         |         |                               |                               |
	| stop    | -p                                                           | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:55 UTC | Fri, 13 Aug 2021 20:55:02 UTC |
	|         | scheduled-stop-20210813205341-393438                         |                                          |         |         |                               |                               |
	|         | --schedule 5s                                                |                                          |         |         |                               |                               |
	| delete  | -p                                                           | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:20 UTC | Fri, 13 Aug 2021 20:55:20 UTC |
	|         | scheduled-stop-20210813205341-393438                         |                                          |         |         |                               |                               |
	| start   | -p                                                           | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:33 UTC |
	|         | offline-containerd-20210813205520-393438                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048                         |                                          |         |         |                               |                               |
	|         | --wait=true --driver=kvm2                                    |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	| delete  | -p                                                           | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:33 UTC | Fri, 13 Aug 2021 20:57:35 UTC |
	|         | offline-containerd-20210813205520-393438                     |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438                               | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:54 UTC |
	|         | --memory=2048                                                |                                          |         |         |                               |                               |
	|         | --install-addons=false                                       |                                          |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438                               | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:54 UTC | Fri, 13 Aug 2021 20:58:28 UTC |
	|         | --alsologtostderr                                            |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                                           |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd                               |                                          |         |         |                               |                               |
	|---------|--------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:57:54
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:57:54.227836  429419 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:57:54.227958  429419 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:57:54.227968  429419 out.go:311] Setting ErrFile to fd 2...
	I0813 20:57:54.227974  429419 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:57:54.228135  429419 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:57:54.228531  429419 out.go:305] Setting JSON to false
	I0813 20:57:54.278874  429419 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6037,"bootTime":1628882238,"procs":186,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:57:54.279049  429419 start.go:121] virtualization: kvm guest
	I0813 20:57:54.283675  429419 out.go:177] * [pause-20210813205520-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:57:54.283809  429419 notify.go:169] Checking for updates...
	I0813 20:57:54.285488  429419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:57:54.287249  429419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:57:54.289101  429419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:57:54.290719  429419 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:57:54.291123  429419 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:57:54.291704  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:57:54.291758  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:57:54.312879  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0813 20:57:54.316190  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:57:54.316884  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:57:54.316904  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:57:54.317328  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:57:54.317559  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:57:54.317721  429419 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:57:54.318327  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:57:54.318368  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:57:54.334038  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0813 20:57:54.334991  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:57:54.335595  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:57:54.335617  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:57:54.336044  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:57:54.336259  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:57:54.374924  429419 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:57:54.374951  429419 start.go:278] selected driver: kvm2
	I0813 20:57:54.374958  429419 start.go:751] validating driver "kvm2" against &{Name:pause-20210813205520-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:57:54.375068  429419 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:57:54.375866  429419 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:57:54.376016  429419 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:57:54.389481  429419 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:57:54.390376  429419 cni.go:93] Creating CNI manager for ""
	I0813 20:57:54.390391  429419 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:57:54.390401  429419 start_flags.go:277] config:
	{Name:pause-20210813205520-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:57:54.390533  429419 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:57:50.331497  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:50.331936  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | unable to find current IP address of domain kubernetes-upgrade-20210813205735-393438 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:50.331970  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | I0813 20:57:50.331883  429234 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:57:52.679005  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:52.679491  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | unable to find current IP address of domain kubernetes-upgrade-20210813205735-393438 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:52.679525  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | I0813 20:57:52.679444  429234 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:57:57.243154  428960 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (5.652778978s)
	I0813 20:57:57.243198  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:57:57.243216  428960 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0: (5.65286892s)
	I0813 20:57:57.243256  428960 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0: (5.652939109s)
	I0813 20:57:57.243263  428960 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory
	I0813 20:57:57.243289  428960 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory
	I0813 20:57:57.243346  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (49545216 bytes)
	I0813 20:57:57.243294  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (14016512 bytes)
	I0813 20:57:57.243225  428960 containerd.go:280] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I0813 20:57:57.243396  428960 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0: (5.653073476s)
	I0813 20:57:57.243436  428960 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0
	I0813 20:57:57.243467  428960 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0: (5.653059507s)
	I0813 20:57:57.243438  428960 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory
	I0813 20:57:57.243492  428960 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory
	I0813 20:57:57.243494  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (29364736 bytes)
	I0813 20:57:57.243504  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (30411776 bytes)
	I0813 20:57:54.392920  429419 out.go:177] * Starting control plane node pause-20210813205520-393438 in cluster pause-20210813205520-393438
	I0813 20:57:54.392962  429419 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:57:54.393008  429419 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:57:54.393038  429419 cache.go:56] Caching tarball of preloaded images
	I0813 20:57:54.393233  429419 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:57:54.393257  429419 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:57:54.393491  429419 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/config.json ...
	I0813 20:57:54.393699  429419 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:57:54.393726  429419 start.go:313] acquiring machines lock for pause-20210813205520-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
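The preload lines above reduce to a cache check: the version-specific tarball is already present under .minikube/cache, so only its existence is verified and the download is skipped. A sketch of that check-then-fetch pattern; ensurePreload and its arguments are hypothetical, not minikube's preload.go:

    // Sketch of the check-then-fetch pattern the preload log describes.
    // Hypothetical helper; the path and URL are the caller's placeholders.
    package preload

    import (
        "io"
        "net/http"
        "os"
    )

    func ensurePreload(localPath, url string) error {
        if _, err := os.Stat(localPath); err == nil {
            return nil // found in cache, skip the download
        }
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(localPath)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }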
	I0813 20:57:56.047749  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:56.048296  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | unable to find current IP address of domain kubernetes-upgrade-20210813205735-393438 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:56.048330  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | I0813 20:57:56.048218  429234 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:57:59.169571  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:59.170032  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Found IP for machine: 192.168.39.75
	I0813 20:57:59.170055  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has current primary IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:57:59.170065  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserving static IP address...
	I0813 20:57:59.170394  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-20210813205735-393438", mac: "52:54:00:50:ef:93", ip: "192.168.39.75"} in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.353104  429197 start.go:317] acquired machines lock for "running-upgrade-20210813205520-393438" in 20.583632989s
	I0813 20:58:01.353144  429197 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:58:01.353155  429197 fix.go:55] fixHost starting: 
	I0813 20:58:01.353619  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:01.353675  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:01.367986  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0813 20:58:01.368354  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:01.368854  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:58:01.368880  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:01.369265  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:01.369438  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:01.369539  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetState
	I0813 20:58:01.372492  429197 fix.go:108] recreateIfNeeded on running-upgrade-20210813205520-393438: state=Running err=<nil>
	W0813 20:58:01.372512  429197 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:58:02.369671  429419 start.go:317] acquired machines lock for "pause-20210813205520-393438" in 7.975923017s
	I0813 20:58:02.369767  429419 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:58:02.369778  429419 fix.go:55] fixHost starting: 
	I0813 20:58:02.370221  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:02.370267  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:02.389834  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0813 20:58:02.390322  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:02.391005  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:02.391038  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:02.391560  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:02.391790  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.392267  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:58:02.395467  429419 fix.go:108] recreateIfNeeded on pause-20210813205520-393438: state=Running err=<nil>
	W0813 20:58:02.395488  429419 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:58:00.620023  428960 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0: (3.376559727s)
	I0813 20:58:00.620053  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
	I0813 20:58:00.620081  428960 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:00.620122  428960 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:01.071409  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache
	I0813 20:58:01.071438  428960 containerd.go:280] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0813 20:58:01.071478  428960 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0813 20:58:01.801397  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache
	I0813 20:58:01.801444  428960 containerd.go:280] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0813 20:58:01.801499  428960 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0813 20:58:02.633263  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache
	I0813 20:58:02.633311  428960 containerd.go:280] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0
	I0813 20:58:02.633369  428960 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0
	I0813 20:58:02.512977  429419 out.go:177] * Updating the running kvm2 "pause-20210813205520-393438" VM ...
	I0813 20:58:02.513037  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.513323  429419 machine.go:88] provisioning docker machine ...
	I0813 20:58:02.513356  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.513579  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:02.513769  429419 buildroot.go:166] provisioning hostname "pause-20210813205520-393438"
	I0813 20:58:02.513794  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:02.513948  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.519437  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.519941  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:02.519978  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.520179  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.520364  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.520484  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.520593  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.520727  429419 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:02.520932  429419 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0813 20:58:02.520947  429419 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210813205520-393438 && echo "pause-20210813205520-393438" | sudo tee /etc/hostname
	I0813 20:58:02.664206  429419 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210813205520-393438
	
	I0813 20:58:02.664232  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.669771  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.670174  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:02.670214  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.670299  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.670487  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.670613  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.670768  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.670969  429419 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:02.671142  429419 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0813 20:58:02.671175  429419 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210813205520-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210813205520-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210813205520-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:58:02.801199  429419 main.go:130] libmachine: SSH cmd err, output: <nil>: 
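The SSH exchanges above are how the provisioner sets the hostname idempotently: set the kernel hostname, write /etc/hostname, then rewrite or append the 127.0.1.1 line in /etc/hosts only when it is missing. A sketch of driving one such command over SSH, roughly what the native client in this log is doing; it assumes golang.org/x/crypto/ssh, and the key path and credentials are placeholders:

    // Sketch: run one provisioning command over SSH, approximating the
    // native client exchange in the log. Assumes golang.org/x/crypto/ssh;
    // key path and user are placeholders, the address is from the log.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machine/id_rsa") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.61.151:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname pause-20210813205520-393438 && echo "pause-20210813205520-393438" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }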
	I0813 20:58:02.801235  429419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:58:02.801292  429419 buildroot.go:174] setting up certificates
	I0813 20:58:02.801305  429419 provision.go:83] configureAuth start
	I0813 20:58:02.801322  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:02.801576  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetIP
	I0813 20:58:02.807528  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.807983  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:02.808024  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.808279  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.812654  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.813006  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:02.813049  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.813153  429419 provision.go:138] copyHostCerts
	I0813 20:58:02.813221  429419 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:58:02.813233  429419 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:58:02.813292  429419 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:58:02.813459  429419 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:58:02.813472  429419 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:58:02.813500  429419 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:58:02.813565  429419 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:58:02.813578  429419 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:58:02.813598  429419 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:58:02.813648  429419 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.pause-20210813205520-393438 san=[192.168.61.151 192.168.61.151 localhost 127.0.0.1 minikube pause-20210813205520-393438]
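
Editor's note: the provision.go line above issues a TLS server certificate signed by the local CA, with the listed IP and DNS SANs. A self-contained Go sketch of that step, under the assumption of a throwaway in-memory CA (minikube's real code loads ca.pem/ca-key.pem from disk; errors are elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; the real flow would parse existing PEM files.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"example-ca"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs reported in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-20210813205520-393438"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.151"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "pause-20210813205520-393438"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
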
	I0813 20:58:02.895323  429419 provision.go:172] copyRemoteCerts
	I0813 20:58:02.895392  429419 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:58:02.895422  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.900956  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.901361  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:02.901398  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:02.901520  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.901707  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.901879  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.902043  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:02.992614  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:58:03.012287  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 20:58:03.039330  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:58:03.061251  429419 provision.go:86] duration metric: configureAuth took 259.931214ms
	I0813 20:58:03.061282  429419 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:58:03.061438  429419 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:58:03.061456  429419 machine.go:91] provisioned docker machine in 548.114914ms
	I0813 20:58:03.061465  429419 start.go:267] post-start starting for "pause-20210813205520-393438" (driver="kvm2")
	I0813 20:58:03.061474  429419 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:58:03.061504  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.061845  429419 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:58:03.061884  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:03.067847  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.068220  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:03.068257  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.068447  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:03.068641  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:03.068853  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:03.069085  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:03.159483  429419 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:58:03.166585  429419 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:58:03.166611  429419 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:58:03.166684  429419 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:58:03.166788  429419 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 20:58:03.166940  429419 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:58:03.175603  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:58:03.200099  429419 start.go:270] post-start completed in 138.617691ms
	I0813 20:58:03.200144  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.200450  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:03.206510  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.206933  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:03.206965  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.207089  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:03.207291  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:03.207456  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:03.207571  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:03.207724  429419 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:03.207940  429419 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0813 20:58:03.207956  429419 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 20:58:03.332969  429419 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888283.332928680
	
	I0813 20:58:03.333004  429419 fix.go:212] guest clock: 1628888283.332928680
	I0813 20:58:03.333015  429419 fix.go:225] Guest: 2021-08-13 20:58:03.33292868 +0000 UTC Remote: 2021-08-13 20:58:03.200426824 +0000 UTC m=+9.038396484 (delta=132.501856ms)
	I0813 20:58:03.333066  429419 fix.go:196] guest clock delta is within tolerance: 132.501856ms
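
Editor's note: the fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it to the host's. A sketch of the delta check, assuming a plain parse-and-subtract is all that is involved (the tolerance value here is illustrative):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// withinTolerance parses `date +%s.%N` output and reports the
	// absolute difference from the local reference time.
	func withinTolerance(guestOut string, local time.Time, tol time.Duration) (time.Duration, bool) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec := int64(0)
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		delta := time.Unix(sec, nsec).Sub(local)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		// Values taken from the log lines above.
		delta, ok := withinTolerance("1628888283.332928680", time.Unix(1628888283, 200426824), time.Second)
		fmt.Println(delta, ok) // 132.501856ms true
	}
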
	I0813 20:58:03.333079  429419 fix.go:57] fixHost completed within 963.299889ms
	I0813 20:58:03.333087  429419 start.go:80] releasing machines lock for "pause-20210813205520-393438", held for 963.343148ms
	I0813 20:58:03.333131  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.333414  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetIP
	I0813 20:58:03.339415  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.339852  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:03.339914  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.340067  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.340256  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.340945  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:03.341168  429419 ssh_runner.go:149] Run: systemctl --version
	I0813 20:58:03.341194  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:03.341226  429419 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:58:03.341271  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:03.348033  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.348444  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:03.348474  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.348592  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:03.348817  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:03.348850  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.349023  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:03.349185  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:03.349396  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:03.349425  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:03.349608  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:03.349758  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:03.349937  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:03.350092  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:03.488156  429419 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:58:03.488291  429419 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:03.542272  429419 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:58:03.542308  429419 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:58:03.542387  429419 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:58:03.559727  429419 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:58:03.573052  429419 docker.go:153] disabling docker service ...
	I0813 20:58:03.573110  429419 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:58:03.588513  429419 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:58:03.601081  429419 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:58:03.807642  429419 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:58:04.025905  429419 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:58:04.038186  429419 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:58:04.051431  429419 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
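
Editor's note: the long printf payload above is nothing exotic. It is the target /etc/containerd/config.toml, base64-encoded so it survives shell quoting, then decoded on the guest with `base64 -d`. A tiny Go sketch of the same decode (the constant below is a truncated stand-in, not the full blob):

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		// Truncated example payload: just the first line of the config.
		const payload = "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo="
		raw, err := base64.StdEncoding.DecodeString(payload)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(raw)) // root = "/var/lib/containerd"
	}
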
	I0813 20:58:04.067863  429419 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:58:04.074416  429419 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:58:04.080465  429419 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:58:01.375107  429197 out.go:177] * Updating the running kvm2 "running-upgrade-20210813205520-393438" VM ...
	I0813 20:58:01.375136  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:01.375318  429197 machine.go:88] provisioning docker machine ...
	I0813 20:58:01.375339  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:01.375482  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:01.375970  429197 buildroot.go:166] provisioning hostname "running-upgrade-20210813205520-393438"
	I0813 20:58:01.375991  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:01.376164  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:01.382332  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.382796  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:01.382820  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.383186  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:01.383367  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.383498  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.383635  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:01.383755  429197 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:01.383938  429197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I0813 20:58:01.383955  429197 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210813205520-393438 && echo "running-upgrade-20210813205520-393438" | sudo tee /etc/hostname
	I0813 20:58:01.526891  429197 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210813205520-393438
	
	I0813 20:58:01.526928  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:01.532560  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.532948  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:01.532985  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.533195  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:01.533405  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.533557  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.533700  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:01.533893  429197 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:01.534068  429197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I0813 20:58:01.534095  429197 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210813205520-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210813205520-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210813205520-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:58:01.660693  429197 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:58:01.660786  429197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:58:01.660823  429197 buildroot.go:174] setting up certificates
	I0813 20:58:01.660859  429197 provision.go:83] configureAuth start
	I0813 20:58:01.660884  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetMachineName
	I0813 20:58:01.661212  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetIP
	I0813 20:58:01.666792  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.667132  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:01.667161  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.667299  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:01.672142  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.672449  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:01.672484  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.672617  429197 provision.go:138] copyHostCerts
	I0813 20:58:01.672676  429197 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:58:01.672687  429197 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:58:01.672733  429197 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:58:01.672819  429197 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:58:01.672828  429197 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:58:01.672845  429197 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:58:01.672891  429197 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:58:01.672898  429197 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:58:01.672914  429197 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
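
Editor's note: the copyHostCerts block above follows a found → rm → cp pattern for each PEM file. A minimal sketch of that remove-then-copy step in Go (paths are placeholders; error handling trimmed to the essentials):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyHostCert replaces dst with a fresh copy of src, mirroring the
	// "found ..., removing ..." / "cp: ... --> ..." lines in the log.
	func copyHostCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
		if err != nil {
			return err
		}
		defer out.Close()
		n, err := io.Copy(out, in)
		fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
		return err
	}

	func main() {
		_ = copyHostCert("certs/key.pem", "key.pem") // placeholder paths
	}
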
	I0813 20:58:01.672953  429197 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210813205520-393438 san=[192.168.72.177 192.168.72.177 localhost 127.0.0.1 minikube running-upgrade-20210813205520-393438]
	I0813 20:58:01.955669  429197 provision.go:172] copyRemoteCerts
	I0813 20:58:01.955750  429197 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:58:01.955797  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:01.961518  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.961848  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:01.961879  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:01.962073  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:01.962296  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.962470  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:01.962590  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:02.054935  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 20:58:02.074997  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:58:02.092406  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:58:02.110099  429197 provision.go:86] duration metric: configureAuth took 449.221105ms
	I0813 20:58:02.110122  429197 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:58:02.110320  429197 config.go:177] Loaded profile config "running-upgrade-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:58:02.110338  429197 machine.go:91] provisioned docker machine in 735.009311ms
	I0813 20:58:02.110348  429197 start.go:267] post-start starting for "running-upgrade-20210813205520-393438" (driver="kvm2")
	I0813 20:58:02.110356  429197 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:58:02.110388  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.110765  429197 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:58:02.110807  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.116435  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.116808  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:02.116836  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.117005  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.117198  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.117362  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.117528  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:02.206660  429197 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:58:02.211471  429197 info.go:137] Remote host: Buildroot 2020.02.8
	I0813 20:58:02.211494  429197 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:58:02.211553  429197 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:58:02.211677  429197 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 20:58:02.211807  429197 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:58:02.219467  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:58:02.239252  429197 start.go:270] post-start completed in 128.890177ms
	I0813 20:58:02.239303  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.239590  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.245021  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.245400  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:02.245432  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.245580  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.245771  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.245946  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.246095  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.246247  429197 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:02.246390  429197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I0813 20:58:02.246400  429197 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 20:58:02.369458  429197 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888282.369620172
	
	I0813 20:58:02.369485  429197 fix.go:212] guest clock: 1628888282.369620172
	I0813 20:58:02.369495  429197 fix.go:225] Guest: 2021-08-13 20:58:02.369620172 +0000 UTC Remote: 2021-08-13 20:58:02.239570477 +0000 UTC m=+21.705648010 (delta=130.049695ms)
	I0813 20:58:02.369521  429197 fix.go:196] guest clock delta is within tolerance: 130.049695ms
	I0813 20:58:02.369534  429197 fix.go:57] fixHost completed within 1.016379041s
	I0813 20:58:02.369545  429197 start.go:80] releasing machines lock for "running-upgrade-20210813205520-393438", held for 1.016415951s
	I0813 20:58:02.369590  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.369919  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetIP
	I0813 20:58:02.376274  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.376670  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:02.376702  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.376996  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.377181  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.377763  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:58:02.378018  429197 ssh_runner.go:149] Run: systemctl --version
	I0813 20:58:02.378051  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.378116  429197 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:58:02.378199  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:02.390031  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.391575  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:02.391628  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.391649  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.391676  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:02.391696  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.391720  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:02.391742  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:02.391785  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.391820  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:02.391976  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.392015  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:02.392165  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:02.392434  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:02.482957  429197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0813 20:58:02.483058  429197 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:58:02.528626  429197 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:58:02.542800  429197 docker.go:153] disabling docker service ...
	I0813 20:58:02.542854  429197 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:58:02.556787  429197 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:58:02.569356  429197 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:58:02.791696  429197 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:58:03.034623  429197 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:58:03.050969  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:58:03.070245  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 20:58:03.089065  429197 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:58:03.096355  429197 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:58:03.103961  429197 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:58:03.305775  429197 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:58:03.376047  429197 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:58:03.376196  429197 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:03.386544  429197 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:58:04.492070  429197 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
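
Editor's note: the retry.go/stat exchange above is a poll loop: stat the containerd socket until it appears or the 60s budget runs out. A sketch of that shape (the fixed 1s sleep is an assumption; the log shows a jittered ~1.1s retry):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or timeout elapses,
	// matching the "Will wait 60s for socket path" lines above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second) // pause between stat attempts
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is up")
	}
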
	I0813 20:58:04.498525  429197 start.go:413] Will wait 60s for crictl version
	I0813 20:58:04.498600  429197 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:58:04.532736  429197 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.3
	RuntimeApiVersion:  v1alpha2
	I0813 20:58:04.532802  429197 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:04.586917  429197 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:00.213385  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserved static IP address: 192.168.39.75
	I0813 20:58:00.221280  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Waiting for SSH to be available...
	I0813 20:58:00.221299  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Getting to WaitForSSH function...
	I0813 20:58:00.221336  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.221380  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.221409  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.221428  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH client type: external
	I0813 20:58:00.221459  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa (-rw-------)
	I0813 20:58:00.221511  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:58:00.221528  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | About to run SSH command:
	I0813 20:58:00.221539  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | exit 0
	I0813 20:58:00.363262  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 20:58:00.363799  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) KVM machine creation complete!
	I0813 20:58:00.363855  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetConfigRaw
	I0813 20:58:00.364524  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:00.364740  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:00.364883  429159 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 20:58:00.364914  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:58:00.368233  429159 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 20:58:00.368251  429159 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 20:58:00.368260  429159 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 20:58:00.368270  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:00.373705  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.374033  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.374063  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.374182  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:00.374390  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.374560  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.374723  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:00.374924  429159 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:00.375090  429159 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:58:00.375104  429159 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 20:58:00.502332  429159 main.go:130] libmachine: SSH cmd err, output: <nil>: 
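WaitForSSH, as the log shows, is just "exit 0" run over SSH until a session succeeds. A sketch of the same probe using golang.org/x/crypto/ssh, with the address, user, and host-key/timeout settings taken from the log; "id_rsa" is a placeholder for the machine key path:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the machine and runs "exit 0", the same liveness check
// logged above; a nil error means a session can execute commands.
func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,            // ConnectTimeout=10 above
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	fmt.Println(probeSSH("192.168.39.75:22", "docker", "id_rsa"))
}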
	I0813 20:58:00.502360  429159 main.go:130] libmachine: Detecting the provisioner...
	I0813 20:58:00.502374  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:00.508141  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.508493  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.508523  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.508695  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:00.508912  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.509059  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.509239  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:00.509398  429159 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:00.509564  429159 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:58:00.509579  429159 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 20:58:00.635654  429159 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 20:58:00.635727  429159 main.go:130] libmachine: found compatible host: buildroot
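Provisioner detection is a cat of /etc/os-release followed by key/value parsing; ID=buildroot is what marks the host compatible. A small sketch of that parse, fed the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release KEY=value lines into a map; the
// provisioner is chosen from ID (here "buildroot").
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = strings.Trim(v, `"`) // PRETTY_NAME is quoted
		}
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["VERSION"]) // buildroot 2020.02.12
}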
	I0813 20:58:00.635742  429159 main.go:130] libmachine: Provisioning with buildroot...
	I0813 20:58:00.635760  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:58:00.636036  429159 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210813205735-393438"
	I0813 20:58:00.636067  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:58:00.636269  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:00.641594  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.641942  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.641973  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.642121  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:00.642308  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.642483  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.642657  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:00.642857  429159 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:00.643031  429159 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:58:00.643053  429159 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813205735-393438 && echo "kubernetes-upgrade-20210813205735-393438" | sudo tee /etc/hostname
	I0813 20:58:00.778472  429159 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813205735-393438
	
	I0813 20:58:00.778498  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:00.783660  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.784012  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.784067  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.784209  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:00.784381  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.784543  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:00.784694  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:00.784870  429159 main.go:130] libmachine: Using SSH client type: native
	I0813 20:58:00.785075  429159 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:58:00.785109  429159 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813205735-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813205735-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813205735-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:58:00.913703  429159 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:58:00.913743  429159 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:58:00.913787  429159 buildroot.go:174] setting up certificates
	I0813 20:58:00.913800  429159 provision.go:83] configureAuth start
	I0813 20:58:00.913823  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:58:00.914103  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:58:00.919932  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.920411  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.920456  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.920581  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:00.925818  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.926126  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:00.926166  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:00.926324  429159 provision.go:138] copyHostCerts
	I0813 20:58:00.926406  429159 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:58:00.926432  429159 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:58:00.926476  429159 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:58:00.926634  429159 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:58:00.926646  429159 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:58:00.926684  429159 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:58:00.926764  429159 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:58:00.926776  429159 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:58:00.926800  429159 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:58:00.926879  429159 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813205735-393438 san=[192.168.39.75 192.168.39.75 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813205735-393438]
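The "generating server cert" line lists the SANs baked into server.pem: the VM IP, localhost, and both hostnames. A sketch of a CA-signed server certificate with those SANs using crypto/x509 (error handling elided; the in-memory CA here stands in for ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// In-memory CA standing in for ca.pem / ca-key.pem (errors elided).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the provision.go line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20210813205735-393438"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210813205735-393438"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.75"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}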
	I0813 20:58:01.059804  429159 provision.go:172] copyRemoteCerts
	I0813 20:58:01.059876  429159 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:58:01.059907  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:01.065200  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.065563  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.065602  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.065739  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:01.065882  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.066028  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:01.066171  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:01.159049  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:58:01.176032  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 20:58:01.195526  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
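copyRemoteCerts then lands the three PEMs under /etc/docker over the SSH client just opened. One simple way to get the same effect, assuming an established *ssh.Client; ssh_runner's actual scp transfer differs, but the result (a root-owned file on the VM) matches. "id_rsa" and "server.pem" are placeholder local paths:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile pipes bytes into `sudo tee` on the VM, landing a root-owned
// file over SSH without a separate scp channel.
func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q > /dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("id_rsa") // placeholder machine key
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.75:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	cert, _ := os.ReadFile("server.pem")
	fmt.Println(pushFile(client, cert, "/etc/docker/server.pem"))
}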
	I0813 20:58:01.213070  429159 provision.go:86] duration metric: configureAuth took 299.254875ms
	I0813 20:58:01.213100  429159 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:58:01.213276  429159 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:58:01.213305  429159 main.go:130] libmachine: Checking connection to Docker...
	I0813 20:58:01.213329  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetURL
	I0813 20:58:01.216389  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using libvirt version 3000000
	I0813 20:58:01.221577  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.221955  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.221990  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.222171  429159 main.go:130] libmachine: Docker is up and running!
	I0813 20:58:01.222207  429159 main.go:130] libmachine: Reticulating splines...
	I0813 20:58:01.222214  429159 client.go:171] LocalClient.Create took 19.667289897s
	I0813 20:58:01.222233  429159 start.go:168] duration metric: libmachine.API.Create for "kubernetes-upgrade-20210813205735-393438" took 19.667394425s
	I0813 20:58:01.222244  429159 start.go:267] post-start starting for "kubernetes-upgrade-20210813205735-393438" (driver="kvm2")
	I0813 20:58:01.222248  429159 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:58:01.222272  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:01.222498  429159 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:58:01.222523  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:01.227009  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.227359  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.227390  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.227487  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:01.227646  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.227808  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:01.227952  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:01.315152  429159 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:58:01.319951  429159 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:58:01.319978  429159 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:58:01.320038  429159 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:58:01.320144  429159 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 20:58:01.320263  429159 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:58:01.327417  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:58:01.345758  429159 start.go:270] post-start completed in 123.501719ms
	I0813 20:58:01.345816  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetConfigRaw
	I0813 20:58:01.346427  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:58:01.352086  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.352465  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.352501  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.352797  429159 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/config.json ...
	I0813 20:58:01.352984  429159 start.go:129] duration metric: createHost completed in 19.817325773s
	I0813 20:58:01.353000  429159 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813205735-393438", held for 19.817557489s
	I0813 20:58:01.353048  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:01.353264  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:58:01.358297  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.358676  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.358709  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.358858  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:01.358994  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:01.359456  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:01.359693  429159 ssh_runner.go:149] Run: systemctl --version
	I0813 20:58:01.359726  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:01.359737  429159 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:58:01.359784  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:01.365673  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.365889  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.365918  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.366274  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:01.366498  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.366637  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:01.367026  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:01.367707  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.374603  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:01.374638  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:01.374834  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:01.375036  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:01.375217  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:01.375347  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:01.498522  429159 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:58:01.498661  429159 ssh_runner.go:149] Run: sudo crictl images --output json
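`sudo crictl images --output json` is how the preload check decides whether tarball extraction can be skipped. A sketch of that check; the JSON field names follow CRI's ListImagesResponse shape and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors the assumed shape of `crictl images --output json`.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether a tag is already present in the runtime, the
// same test used above to decide if images are preloaded.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("gcr.io/k8s-minikube/storage-provisioner:v5")
	fmt.Println(ok, err)
}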
	I0813 20:58:04.631173  429197 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I0813 20:58:04.631233  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetIP
	I0813 20:58:04.637218  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:04.637588  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:58:04.637614  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:58:04.637880  429197 ssh_runner.go:149] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0813 20:58:04.642486  429197 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
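The one-liner above rewrites /etc/hosts safely: strip any stale host.minikube.internal entry, append the fresh one, stage the result in /tmp, then copy it into place so the file is never observed half-written. The same pattern as a local Go sketch, using temp-file-plus-rename in place of the cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any stale tab-separated entry for name and
// appends a fresh one, writing via a temp file plus rename so the hosts
// file is replaced atomically.
func upsertHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same match as the grep -v above
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // atomic on the same filesystem
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"))
}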
	I0813 20:58:04.657473  429197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0813 20:58:04.657526  429197 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:04.698149  429197 containerd.go:609] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0813 20:58:04.698175  429197 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 20:58:04.698259  429197 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:58:04.698296  429197 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0813 20:58:04.698321  429197 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0813 20:58:04.698485  429197 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0813 20:58:04.698534  429197 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:58:04.698554  429197 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0813 20:58:04.698560  429197 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:58:04.698487  429197 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0813 20:58:04.698701  429197 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0813 20:58:04.698260  429197 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0813 20:58:04.706868  429197 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
	I0813 20:58:04.708066  429197 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
	I0813 20:58:04.715767  429197 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
	I0813 20:58:04.717334  429197 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{Image:0xc0006405a0}
	I0813 20:58:04.717381  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0813 20:58:04.726947  429197 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
	I0813 20:58:05.046743  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0813 20:58:05.047037  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0 | grep 3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0813 20:58:05.047294  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0 | grep 10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0813 20:58:05.051798  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0813 20:58:05.362144  429197 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc0006406c0}
	I0813 20:58:05.362239  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0813 20:58:05.480324  429197 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc0006406c0}
	I0813 20:58:05.480454  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
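Each `ctr -n=k8s.io images check | grep <tag> | grep <sha>` pipeline above verifies not just that a tag exists but that it carries the expected sha256, so a stale tag can't masquerade as the cached image. A sketch of that double check without the shell pipeline:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkImageDigest scans `ctr images check` output for a line carrying
// both the tag and the expected digest, matching the grep pipeline above.
func checkImageDigest(tag, sha string) (bool, error) {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, tag) && strings.Contains(line, sha) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := checkImageDigest("k8s.gcr.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println(ok, err)
}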
	I0813 20:58:07.719819  428960 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: (5.086420619s)
	I0813 20:58:07.719849  428960 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 from cache
	I0813 20:58:07.719868  428960 cache_images.go:113] Successfully loaded all cached images
	I0813 20:58:07.719874  428960 cache_images.go:82] LoadImages completed in 24.158820123s
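A cache miss falls back to importing the image tarball into containerd's k8s.io namespace; the kube-proxy import above took just over five seconds. A sketch of that step with its duration metric:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// importImage loads a cached image tarball via `ctr images import`, the
// "Transferred and loaded ... from cache" step above, and times it.
func importImage(tarPath string) (time.Duration, error) {
	start := time.Now()
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarPath).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("ctr import: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := importImage("/var/lib/minikube/images/kube-proxy_v1.20.0")
	fmt.Println(d, err)
}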
	I0813 20:58:07.719929  428960 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:58:07.775783  428960 cni.go:93] Creating CNI manager for ""
	I0813 20:58:07.775816  428960 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:07.775836  428960 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:58:07.775852  428960 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.169 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-20210813205520-393438 NodeName:stopped-upgrade-20210813205520-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.72.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:58:07.776044  428960 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-20210813205520-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
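The YAML above is rendered from the kubeadm options struct logged a few lines earlier. A pared-down sketch of that rendering with text/template, covering only the InitConfiguration stanza (the options struct and its field names here are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

// A minimal stand-in for kubeadm.go's config rendering: fields from the
// logged options struct are interpolated into a YAML template.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type kubeadmOptions struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	opts := kubeadmOptions{ // values from the logged kubeadm options line
		AdvertiseAddress: "192.168.72.169",
		APIServerPort:    8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "stopped-upgrade-20210813205520-393438",
		NodeIP:           "192.168.72.169",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, opts)
}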
	
	I0813 20:58:07.776185  428960 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-20210813205520-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.169 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:58:07.776261  428960 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0813 20:58:07.789939  428960 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:58:07.790015  428960 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:58:07.801822  428960 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (553 bytes)
	I0813 20:58:07.837438  428960 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:58:04.274633  429419 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:58:04.325363  429419 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:58:04.325440  429419 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:04.332343  429419 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:58:05.440950  429419 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:05.454829  429419 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:58:07.616021  429419 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:07.625125  429419 start.go:413] Will wait 60s for crictl version
	I0813 20:58:07.625202  429419 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:58:07.673919  429419 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:58:07.673993  429419 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:07.730049  429419 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:07.858538  429419 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:58:07.858584  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetIP
	I0813 20:58:07.866501  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:07.866897  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:07.866923  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:07.867447  429419 ssh_runner.go:149] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0813 20:58:07.874725  429419 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:58:07.874789  429419 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:07.931997  429419 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:58:07.932024  429419 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:58:07.932086  429419 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:08.014663  429419 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:58:08.014706  429419 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:58:08.014762  429419 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:58:08.079166  429419 cni.go:93] Creating CNI manager for ""
	I0813 20:58:08.079204  429419 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:08.079216  429419 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:58:08.079231  429419 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210813205520-393438 NodeName:pause-20210813205520-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:58:08.079388  429419 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210813205520-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:58:08.079502  429419 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210813205520-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.151 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:58:08.079563  429419 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:58:08.102835  429419 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:58:08.102912  429419 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:58:08.115661  429419 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0813 20:58:08.157693  429419 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:58:08.203276  429419 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I0813 20:58:08.256934  429419 ssh_runner.go:149] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I0813 20:58:08.281875  429419 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438 for IP: 192.168.61.151
	I0813 20:58:08.281938  429419 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:58:08.281956  429419 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:58:08.282022  429419 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/client.key
	I0813 20:58:08.282056  429419 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/apiserver.key.c009ae4a
	I0813 20:58:08.282077  429419 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/proxy-client.key
	I0813 20:58:08.282200  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:58:08.282247  429419 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:58:08.282258  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:58:08.282287  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:58:08.282314  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:58:08.282341  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:58:08.282394  429419 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
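
The "skipping … CA generation" and "skipping … signed cert generation" lines above reflect a guard that only regenerates a key/cert pair when the key file is missing from disk. A minimal sketch of that guard, with hypothetical paths; the openssl one-liner merely stands in for the Go crypto code minikube actually uses:

    # Regenerate the CA only when its key is absent or empty (hypothetical sketch).
    CA_KEY="$HOME/.minikube/ca.key"
    CA_CRT="$HOME/.minikube/ca.crt"
    if [ -s "$CA_KEY" ]; then
        echo "skipping minikubeCA CA generation: $CA_KEY"
    else
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -subj "/CN=minikubeCA" -keyout "$CA_KEY" -out "$CA_CRT"
    fi
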
	I0813 20:58:08.283807  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:58:08.406838  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:58:08.438798  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:58:08.485422  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:58:08.522308  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:58:08.606848  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:58:08.658827  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:58:08.702848  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:58:08.750845  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:58:08.810823  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:58:08.870860  429419 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:58:08.912634  429419 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:58:08.996309  429419 ssh_runner.go:149] Run: openssl version
	I0813 20:58:09.018539  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:58:09.054920  429419 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:58:09.070854  429419 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:58:09.070922  429419 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:58:09.082365  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:58:09.097764  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:58:09.138606  429419 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:58:09.157109  429419 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:58:09.157202  429419 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
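
The openssl/ln pairs above implement OpenSSL's hashed-lookup convention: `openssl x509 -hash -noout` prints the certificate's subject hash, and the CA is installed by symlinking the PEM to /etc/ssl/certs/<hash>.0 (the 51391683.0 seen earlier is exactly that hash for 393438.pem). A sketch of one such installation, assuming root:

    # Install a CA where OpenSSL's lookup will find it.
    pem=/usr/share/ca-certificates/393438.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. 51391683
    ln -fs "$pem" "/etc/ssl/certs/${hash}.0"       # OpenSSL resolves CAs by <hash>.0
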
	I0813 20:58:05.498928  429159 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.000228359s)
	I0813 20:58:05.499055  429159 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.14.0". assuming images are not preloaded.
	I0813 20:58:05.499118  429159 ssh_runner.go:149] Run: which lz4
	I0813 20:58:05.505579  429159 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:58:05.512832  429159 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:58:05.512872  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (898265258 bytes)
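
The failed stat above is deliberate: the file is probed first, and the ~900 MB copy only happens when the probe exits nonzero. A sketch of that check-then-copy pattern; the ssh/scp target is a placeholder for the test VM, which minikube really reaches through its own ssh_runner:

    # Copy the preload tarball only on a cache miss (hypothetical host/paths).
    VM=test-vm
    tarball="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4"
    if ! ssh "$VM" 'stat -c "%s %y" /preloaded.tar.lz4' >/dev/null 2>&1; then
        scp "$tarball" "$VM:/preloaded.tar.lz4"   # ~900 MB, transferred only on a miss
    fi
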
	I0813 20:58:05.581788  429197 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{Image:0xc0000d2980}
	I0813 20:58:05.581925  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0813 20:58:07.454995  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c": (2.737580589s)
	I0813 20:58:07.455107  429197 cache_images.go:106] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0813 20:58:07.456143  429197 cri.go:205] Removing image: k8s.gcr.io/pause:3.2
	I0813 20:58:07.466963  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:07.772130  429197 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000640680}
	I0813 20:58:07.772235  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0813 20:58:08.410519  429197 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{Image:0xc0000d2a40}
	I0813 20:58:08.410599  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0 | grep 0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0813 20:58:10.164973  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99": (5.118182034s)
	I0813 20:58:10.165073  429197 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0813 20:58:10.165112  429197 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0813 20:58:10.165197  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:10.165272  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0 | grep 10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc": (5.117959478s)
	I0813 20:58:10.165302  429197 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0813 20:58:10.165325  429197 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.20.0
	I0813 20:58:10.165357  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:10.165415  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0 | grep 3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899": (5.118358244s)
	I0813 20:58:10.165442  429197 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0813 20:58:10.165460  429197 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0813 20:58:10.165484  429197 ssh_runner.go:149] Run: which crictl
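
Every "needs transfer" verdict above comes from the same pipeline: list containerd's images, grep for the image name, then grep for the digest minikube expects. An empty result means the image is absent (or present at the wrong hash), so any stale copy is removed and the cached tarball will be loaded instead. A sketch for one image, using values from the log:

    # Decide whether an image must be transferred from the local cache.
    img="k8s.gcr.io/kube-apiserver:v1.20.0"
    sha="ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
    if ! sudo ctr -n=k8s.io images check | grep "$img" | grep -q "$sha"; then
        echo "\"$img\" needs transfer"
        crictl_bin=$(which crictl)          # mirrors the "Run: which crictl" lines
        sudo "$crictl_bin" rmi "$img"       # drop any stale copy before reloading
    fi
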
	I0813 20:58:07.867749  428960 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0813 20:58:07.895765  428960 ssh_runner.go:149] Run: grep 192.168.72.169	control-plane.minikube.internal$ /etc/hosts
	I0813 20:58:07.914639  428960 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:58:07.940069  428960 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438 for IP: 192.168.72.169
	I0813 20:58:07.940139  428960 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:58:07.940167  428960 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:58:07.940234  428960 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/client.key
	I0813 20:58:07.940258  428960 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/apiserver.key.8ded931d
	I0813 20:58:07.940293  428960 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/proxy-client.key
	I0813 20:58:07.940424  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:58:07.940479  428960 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:58:07.940493  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:58:07.940541  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:58:07.940586  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:58:07.940620  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:58:07.940732  428960 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:58:07.942220  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:58:07.979190  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:58:08.030737  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:58:08.085087  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:58:08.113205  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:58:08.163194  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:58:08.203218  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:58:08.247912  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:58:08.301867  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:58:08.362606  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:58:08.401477  428960 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:58:08.450876  428960 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:58:08.475119  428960 ssh_runner.go:149] Run: openssl version
	I0813 20:58:08.491389  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:58:08.518021  428960 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:08.529708  428960 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:08.529784  428960 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:08.543572  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:58:08.555638  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:58:08.568265  428960 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:58:08.585294  428960 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:58:08.585359  428960 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:58:08.595321  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:58:08.616903  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:58:08.628011  428960 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:58:08.643503  428960 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:58:08.643563  428960 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 20:58:08.653144  428960 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:58:08.667269  428960 kubeadm.go:390] StartCluster: {Name:stopped-upgrade-20210813205520-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName
:stopped-upgrade-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.169 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:08.667418  428960 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:58:08.667533  428960 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:08.706592  428960 cri.go:76] found id: "011102c3b344c899a5bebec2b9eabbfd5c46ff9d066de5a22f8565286e749eaa"
	I0813 20:58:08.706630  428960 cri.go:76] found id: "42ef351e6138ddb6202d822d4f338b94da8a80a9f5d46559b2b57f8d11823a4d"
	I0813 20:58:08.706637  428960 cri.go:76] found id: "2e70cdbbe555488f2b5d63a97e718a9df9a22b15dbd7819ed90ffac142e83880"
	I0813 20:58:08.706643  428960 cri.go:76] found id: "40d29042a0eae2d33befba63432abca0ed470881fc1c0006ce428d516fd26ac2"
	I0813 20:58:08.706648  428960 cri.go:76] found id: ""
	I0813 20:58:08.706708  428960 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:58:08.734895  428960 cri.go:103] JSON = null
	W0813 20:58:08.735010  428960 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 4
	I0813 20:58:08.735114  428960 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:58:08.751584  428960 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:58:08.751605  428960 kubeadm.go:600] restartCluster start
	I0813 20:58:08.751651  428960 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:58:08.766823  428960 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:08.768043  428960 kubeconfig.go:117] verify returned: extract IP: "stopped-upgrade-20210813205520-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:08.768402  428960 kubeconfig.go:128] "stopped-upgrade-20210813205520-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:58:08.769329  428960 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:08.770404  428960 kapi.go:59] client config for stopped-upgrade-20210813205520-393438: &rest.Config{Host:"https://192.168.72.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/stopped-upgrade-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles
/stopped-upgrade-20210813205520-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:08.772648  428960 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:58:08.783734  428960 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -65,4 +65,10 @@
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	 clusterCIDR: "10.244.0.0/16"
	-metricsBindAddress: 192.168.72.169:10249
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
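
The reconfigure decision is driven by the exit status of the `diff -u` run two lines earlier: 0 means the deployed kubeadm.yaml already matches the newly rendered one, 1 means they differ (here, the kube-proxy metrics address and conntrack settings changed). A sketch of that check, assuming the same paths the log shows:

    # diff exits 0 when identical, 1 when the files differ.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "configs match, no reconfigure needed"
    else
        echo "needs reconfigure: configs differ"
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
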
	I0813 20:58:08.783755  428960 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:58:08.783769  428960 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:58:08.783834  428960 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:08.828616  428960 cri.go:76] found id: "011102c3b344c899a5bebec2b9eabbfd5c46ff9d066de5a22f8565286e749eaa"
	I0813 20:58:08.828640  428960 cri.go:76] found id: "42ef351e6138ddb6202d822d4f338b94da8a80a9f5d46559b2b57f8d11823a4d"
	I0813 20:58:08.828647  428960 cri.go:76] found id: "2e70cdbbe555488f2b5d63a97e718a9df9a22b15dbd7819ed90ffac142e83880"
	I0813 20:58:08.828653  428960 cri.go:76] found id: "40d29042a0eae2d33befba63432abca0ed470881fc1c0006ce428d516fd26ac2"
	I0813 20:58:08.828687  428960 cri.go:76] found id: ""
	I0813 20:58:08.828695  428960 cri.go:221] Stopping containers: [011102c3b344c899a5bebec2b9eabbfd5c46ff9d066de5a22f8565286e749eaa 42ef351e6138ddb6202d822d4f338b94da8a80a9f5d46559b2b57f8d11823a4d 2e70cdbbe555488f2b5d63a97e718a9df9a22b15dbd7819ed90ffac142e83880 40d29042a0eae2d33befba63432abca0ed470881fc1c0006ce428d516fd26ac2]
	I0813 20:58:08.828790  428960 ssh_runner.go:149] Run: which crictl
	I0813 20:58:08.834796  428960 ssh_runner.go:149] Run: sudo /bin/crictl stop 011102c3b344c899a5bebec2b9eabbfd5c46ff9d066de5a22f8565286e749eaa 42ef351e6138ddb6202d822d4f338b94da8a80a9f5d46559b2b57f8d11823a4d 2e70cdbbe555488f2b5d63a97e718a9df9a22b15dbd7819ed90ffac142e83880 40d29042a0eae2d33befba63432abca0ed470881fc1c0006ce428d516fd26ac2
	I0813 20:58:08.894950  428960 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 20:58:08.930429  428960 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:58:08.954408  428960 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:58:08.954474  428960 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:58:08.967245  428960 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:58:08.967278  428960 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:58:09.352210  428960 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:58:10.045184  428960 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:58:10.487006  428960 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:58:10.674629  428960 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
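
Because the cluster is being restarted rather than created, individual `kubeadm init` phases are replayed against the existing config instead of a full init. The five commands above reduce to a loop like the following sketch (binary path and config path taken from the log; the unquoted $phase is intentional so "certs all" splits into two arguments):

    # Replay the kubeadm init phases used for a cluster restart.
    BINDIR=/var/lib/minikube/binaries/v1.20.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$BINDIR:$PATH" kubeadm init phase $phase --config "$CFG"
    done
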
	I0813 20:58:10.855661  428960 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:58:10.855741  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:11.372138  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:11.872152  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:12.371930  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
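
The repeated pgrep lines above are a simple poll: roughly every 500 ms (per the timestamps), check whether an apiserver process launched with minikube's flags exists yet. A sketch of the loop:

    # Wait for the apiserver process to appear.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5   # matches the ~500ms cadence visible in the timestamps
    done
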
	I0813 20:58:09.192420  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:58:09.212212  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:58:09.239365  429419 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:09.257029  429419 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:09.257090  429419 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:09.274861  429419 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:58:09.296691  429419 kubeadm.go:390] StartCluster: {Name:pause-20210813205520-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Cl
usterName:pause-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:09.296799  429419 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:58:09.296859  429419 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:09.414812  429419 cri.go:76] found id: "1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62"
	I0813 20:58:09.414841  429419 cri.go:76] found id: "63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0"
	I0813 20:58:09.414848  429419 cri.go:76] found id: "698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889"
	I0813 20:58:09.414854  429419 cri.go:76] found id: "df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418"
	I0813 20:58:09.414858  429419 cri.go:76] found id: "68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8"
	I0813 20:58:09.414864  429419 cri.go:76] found id: "11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016"
	I0813 20:58:09.414869  429419 cri.go:76] found id: ""
	I0813 20:58:09.414918  429419 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:58:09.484312  429419 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","pid":4374,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf/rootfs","created":"2021-08-13T20:58:09.064644692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813205520-393438_86a000e5c08d32d80b2fd4e89cd34dd1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","pid":4269,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976
e702aa5815af3e39eaebbfe81753ef825ae94","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94/rootfs","created":"2021-08-13T20:58:08.848304205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mlf5c_c0812228-e936-4bfa-9fbb-a4d0707f2a63"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","pid":4244,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872/rootfs","created":"2021-08-13T20:58:08.637074413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kuberne
tes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813205520-393438_469cea0375ae276925a50e4dde7e4ace"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","pid":4366,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22/rootfs","created":"2021-08-13T20:58:09.044079666Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813205520-393438_36ca0d21ef43020c8f018e62049ff15f"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","pid":4318,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1/rootfs","created":"2021-08-13T20:58:08.900909328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813205520-393438_81d9f8c777d9fb26ff8b7d9c93d26d5e"},"owner":"root"}]
	I0813 20:58:09.484468  429419 cri.go:113] list returned 5 containers
	I0813 20:58:09.484480  429419 cri.go:116] container: {ID:3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf Status:created}
	I0813 20:58:09.484495  429419 cri.go:118] skipping 3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf - not in ps
	I0813 20:58:09.484502  429419 cri.go:116] container: {ID:47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 Status:created}
	I0813 20:58:09.484509  429419 cri.go:118] skipping 47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 - not in ps
	I0813 20:58:09.484519  429419 cri.go:116] container: {ID:53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 Status:running}
	I0813 20:58:09.484525  429419 cri.go:118] skipping 53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 - not in ps
	I0813 20:58:09.484530  429419 cri.go:116] container: {ID:a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 Status:created}
	I0813 20:58:09.484537  429419 cri.go:118] skipping a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 - not in ps
	I0813 20:58:09.484542  429419 cri.go:116] container: {ID:ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 Status:created}
	I0813 20:58:09.484548  429419 cri.go:118] skipping ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 - not in ps
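
The "skipping … - not in ps" lines reconcile two views of the runtime: the paused-container IDs that crictl returned versus the live task list from runc. Only IDs present in both would be acted on; the five tasks runc reports here are freshly (re)created sandboxes rather than the paused containers, so all are skipped. A sketch of that cross-check (jq is an assumption for JSON parsing; the resume step is the hypothetical action an unpause path would take on a match):

    # Cross-reference runc's tasks against crictl's container IDs.
    paused=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    sudo runc --root /run/containerd/runc/k8s.io list -f json | jq -r '.[].id' |
    while read -r id; do
        if grep -q "$id" <<<"$paused"; then
            sudo runc --root /run/containerd/runc/k8s.io resume "$id"
        else
            echo "skipping $id - not in ps"
        fi
    done
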
	I0813 20:58:09.484595  429419 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:58:09.502320  429419 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:58:09.502345  429419 kubeadm.go:600] restartCluster start
	I0813 20:58:09.502401  429419 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:58:09.516916  429419 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:09.518267  429419 kubeconfig.go:93] found "pause-20210813205520-393438" server: "https://192.168.61.151:8443"
	I0813 20:58:09.519116  429419 kapi.go:59] client config for pause-20210813205520-393438: &rest.Config{Host:"https://192.168.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-2021081320552
0-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:09.521291  429419 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:58:09.539031  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:09.539094  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:09.577747  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:09.778007  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:09.778103  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:09.805252  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:09.978615  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:09.978710  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:10.017795  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:10.178120  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:10.178226  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:10.197283  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:10.378632  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:10.378712  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:10.392453  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:10.578714  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:10.578807  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:10.609275  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:10.778626  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:10.778722  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:10.824727  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:10.977941  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:10.978056  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:11.030490  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:11.178750  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:11.178845  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 20:58:11.213643  429419 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:11.377841  429419 api_server.go:164] Checking apiserver status ...
	I0813 20:58:11.377911  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:11.408459  429419 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/4590/cgroup
	I0813 20:58:11.437742  429419 api_server.go:180] apiserver freezer: "3:freezer:/kubepods/burstable/pod36ca0d21ef43020c8f018e62049ff15f/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c"
	I0813 20:58:11.437839  429419 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod36ca0d21ef43020c8f018e62049ff15f/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c/freezer.state
	I0813 20:58:11.458559  429419 api_server.go:202] freezer state: "THAWED"
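
Finding a pid is not enough when containers may be paused, so the pid from pgrep is mapped to its freezer cgroup via /proc/<pid>/cgroup and that cgroup's freezer.state is read; THAWED means the apiserver is actually runnable. A sketch using the pid from the log:

    # Check whether the apiserver's freezer cgroup is THAWED or FROZEN.
    pid=4590
    cg=$(sudo grep -E '^[0-9]+:freezer:' "/proc/$pid/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"
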
	I0813 20:58:11.458595  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:11.459507  429419 api_server.go:255] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I0813 20:58:11.459559  429419 retry.go:31] will retry after 270.570007ms: state is "Stopped"
	I0813 20:58:11.731148  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
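
Once the process looks alive, readiness is probed with plain HTTPS GETs against /healthz, retried after short delays while the connection is refused, as the retry lines above show. A sketch; certificate verification is skipped since only reachability matters at this stage:

    # Poll the apiserver healthz endpoint until it answers.
    until curl -fsk --max-time 5 https://192.168.61.151:8443/healthz >/dev/null; do
        sleep 0.3
    done
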
	I0813 20:58:10.371660  429159 containerd.go:546] Took 4.866115 seconds to copy over tarball
	I0813 20:58:10.371742  429159 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:58:11.294190  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562": (5.931928251s)
	I0813 20:58:11.294250  429197 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0813 20:58:11.294286  429197 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:58:11.294331  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:11.294190  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080": (6.2423375s)
	I0813 20:58:11.294384  429197 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0813 20:58:11.294430  429197 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0813 20:58:11.294474  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:11.532112  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4": (6.05159775s)
	I0813 20:58:11.532176  429197 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0813 20:58:11.532220  429197 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:58:11.532271  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:11.532387  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16": (5.950435015s)
	I0813 20:58:11.532418  429197 cache_images.go:106] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0813 20:58:11.532440  429197 cri.go:205] Removing image: k8s.gcr.io/coredns:1.7.0
	I0813 20:58:11.532467  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:11.532540  429197 ssh_runner.go:189] Completed: which crictl: (4.065553664s)
	I0813 20:58:11.532570  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/pause:3.2
	I0813 20:58:11.732719  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db": (3.960446224s)
	I0813 20:58:11.732792  429197 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0813 20:58:11.732831  429197 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:58:11.732888  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:12.177573  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0 | grep 0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934": (3.766948778s)
	I0813 20:58:12.177624  429197 cache_images.go:106] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0813 20:58:12.177658  429197 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.13-0
	I0813 20:58:12.177665  429197 ssh_runner.go:189] Completed: which crictl: (2.012165766s)
	I0813 20:58:12.177705  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.20.0
	I0813 20:58:12.177708  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:12.177743  429197 ssh_runner.go:189] Completed: which crictl: (2.012535427s)
	I0813 20:58:12.177765  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0
	I0813 20:58:12.177775  429197 ssh_runner.go:189] Completed: which crictl: (2.012407159s)
	I0813 20:58:12.177798  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0
	I0813 20:58:12.177837  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0
	I0813 20:58:12.177859  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:58:12.177905  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 20:58:12.177938  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0813 20:58:12.178028  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
	I0813 20:58:12.178104  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/coredns:1.7.0
	I0813 20:58:12.178169  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0813 20:58:12.735766  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0813 20:58:12.735919  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0813 20:58:12.736125  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 20:58:12.736181  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:58:12.736229  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0813 20:58:12.736278  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0813 20:58:12.736344  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0813 20:58:12.736379  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0813 20:58:12.736424  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0
	I0813 20:58:12.736429  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:58:12.749370  429197 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0
	I0813 20:58:12.749474  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0813 20:58:12.749550  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0
	I0813 20:58:12.749625  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
	I0813 20:58:12.749646  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (325632 bytes)
	I0813 20:58:12.749747  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0813 20:58:12.749816  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0813 20:58:12.749892  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0813 20:58:12.749945  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:12.750119  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0813 20:58:12.750142  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (78078976 bytes)
	I0813 20:58:12.769954  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 20:58:12.769986  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
	I0813 20:58:12.770011  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory
	I0813 20:58:12.770044  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (29364736 bytes)
	I0813 20:58:12.796017  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory
	I0813 20:58:12.796065  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (49545216 bytes)
	I0813 20:58:12.796113  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0813 20:58:12.796130  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (17437696 bytes)
	I0813 20:58:12.901221  429197 containerd.go:280] Loading image: /var/lib/minikube/images/pause_3.2
	I0813 20:58:12.901287  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
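
The 429197 sequence above is the image-cache fast path: stat each tarball inside the VM, scp it from the host cache only on a miss, then import it into containerd's k8s.io namespace. A minimal local sketch of that pattern (the helper name is illustrative, and a plain cp stands in for the real scp-over-SSH step):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureImage copies src into dst only when dst is missing, then imports the
// tarball into containerd's k8s.io namespace, mirroring the log sequence above.
func ensureImage(src, dst string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		// Cache miss: transfer the image tarball (scp in the real flow).
		if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copy %s: %v: %s", src, err, out)
		}
	}
	// Import the tarball so containerd can serve it to the kubelet.
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import %s: %v: %s", dst, err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("/cache/pause_3.2", "/var/lib/minikube/images/pause_3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
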
	I0813 20:58:12.872540  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:15.372139  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:15.872056  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:16.372598  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:16.872667  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:17.372116  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:16.732502  429419 api_server.go:255] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 20:58:16.732572  429419 retry.go:31] will retry after 302.767842ms: state is "Stopped"
	I0813 20:58:17.035841  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:20.129718  429197 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0: (7.380304253s)
	I0813 20:58:20.129750  429197 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0813 20:58:20.129841  429197 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0
	I0813 20:58:20.129932  429197 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: (7.380365665s)
	I0813 20:58:20.129955  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
	I0813 20:58:20.129967  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (16093184 bytes)
	I0813 20:58:20.130035  429197 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0: (7.380204148s)
	I0813 20:58:20.130049  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory
	I0813 20:58:20.130061  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (30411776 bytes)
	I0813 20:58:20.130116  429197 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0: (7.380161291s)
	I0813 20:58:20.130130  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory
	I0813 20:58:20.130139  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (14016512 bytes)
	I0813 20:58:20.535154  429197 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
	I0813 20:58:20.535366  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (98416128 bytes)
	I0813 20:58:20.535285  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2: (7.633975714s)
	I0813 20:58:20.535487  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
	I0813 20:58:20.535521  429197 containerd.go:280] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:58:20.535567  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0813 20:58:17.872369  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:18.872811  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:19.372943  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:19.872402  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:20.372393  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:20.872714  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:21.371855  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:21.872551  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:22.372784  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
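
Process 428960, meanwhile, is simply polling pgrep until a kube-apiserver process shows up. A small sketch of that wait loop, assuming the helper name (the real code runs pgrep over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matches pattern or the deadline
// passes; pgrep exits 0 only when at least one process matches.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}
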
	I0813 20:58:20.212718  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:58:20.212769  429419 retry.go:31] will retry after 375.905761ms: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:58:20.588942  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:20.606872  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:20.606922  429419 retry.go:31] will retry after 533.892352ms: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:21.141670  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:21.155734  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:21.156193  429419 retry.go:31] will retry after 477.7945ms: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:21.634331  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:23.372208  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:23.372258  429419 retry.go:31] will retry after 631.911946ms: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:58:24.004585  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:24.014746  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 200:
	ok
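
The 429419 healthz exchange above shows the expected startup progression: 403 while RBAC is not yet bootstrapped, 500 while poststarthooks are still failing, then 200. A sketch of such a poll loop, with TLS verification skipped purely for brevity (the real checker trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 (RBAC not bootstrapped) and 500 (failing poststarthooks)
			// are expected while the apiserver is still coming up.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.151:8443/healthz", time.Minute))
}
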
	I0813 20:58:24.047913  429419 system_pods.go:86] 6 kube-system pods found
	I0813 20:58:24.047950  429419 system_pods.go:89] "coredns-558bd4d5db-jzmnb" [ea00ae4c-f4d9-414c-8762-6314a96c8a06] Running
	I0813 20:58:24.047965  429419 system_pods.go:89] "etcd-pause-20210813205520-393438" [c0f74993-053a-4721-89b9-a9f01c83cb4e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 20:58:24.047978  429419 system_pods.go:89] "kube-apiserver-pause-20210813205520-393438" [1625a41a-5bb8-43ff-b2ca-fc5a2e96907d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0813 20:58:24.047995  429419 system_pods.go:89] "kube-controller-manager-pause-20210813205520-393438" [203329cf-5319-4902-8b08-c8ce7d0adc7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 20:58:24.048006  429419 system_pods.go:89] "kube-proxy-mlf5c" [c0812228-e936-4bfa-9fbb-a4d0707f2a63] Running
	I0813 20:58:24.048017  429419 system_pods.go:89] "kube-scheduler-pause-20210813205520-393438" [a931b4e6-b130-4886-a7dc-604f54907b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 20:58:24.049432  429419 api_server.go:139] control plane version: v1.21.3
	I0813 20:58:24.049458  429419 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.61.151
	I0813 20:58:24.049470  429419 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0813 20:58:24.049476  429419 kubeadm.go:604] restartCluster took 14.547125024s
	I0813 20:58:24.049482  429419 kubeadm.go:392] StartCluster complete in 14.752806003s
	I0813 20:58:24.049539  429419 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:24.050179  429419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:24.051401  429419 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:24.052424  429419 kapi.go:59] client config for pause-20210813205520-393438: &rest.Config{Host:"https://192.168.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-2021081320552
0-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
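
The client config dumped above corresponds to client-go's rest.Config with certificate-based authentication. A minimal sketch of building an equivalent client (the cert paths here are placeholders, not the real profile layout):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.61.151:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/client.crt", // client cert signed by minikubeCA
			KeyFile:  "/path/to/client.key",
			CAFile:   "/path/to/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same kind of call backs the "system_pods" lines further down.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
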
	I0813 20:58:24.058432  429419 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210813205520-393438" rescaled to 1
	I0813 20:58:24.058497  429419 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:58:24.058519  429419 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:58:24.060612  429419 out.go:177] * Verifying Kubernetes components...
	I0813 20:58:24.058608  429419 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:58:24.060684  429419 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:58:24.058738  429419 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:58:24.060748  429419 addons.go:59] Setting default-storageclass=true in profile "pause-20210813205520-393438"
	I0813 20:58:24.060776  429419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210813205520-393438"
	I0813 20:58:24.060734  429419 addons.go:59] Setting storage-provisioner=true in profile "pause-20210813205520-393438"
	I0813 20:58:24.060887  429419 addons.go:135] Setting addon storage-provisioner=true in "pause-20210813205520-393438"
	W0813 20:58:24.060901  429419 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:58:24.060933  429419 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:58:24.061265  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:24.061310  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:24.061398  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:24.061470  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:24.086441  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34281
	I0813 20:58:24.087621  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0813 20:58:24.088228  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:24.088913  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:24.088941  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:24.089353  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:24.089970  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:24.090041  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:24.090242  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:24.090934  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:24.090981  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:24.091580  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:24.091830  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:58:24.101351  429419 kapi.go:59] client config for pause-20210813205520-393438: &rest.Config{Host:"https://192.168.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/pause-2021081320552
0-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:24.105577  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41249
	I0813 20:58:24.106125  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:24.106607  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:24.106626  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:24.107299  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:24.107739  429419 addons.go:135] Setting addon default-storageclass=true in "pause-20210813205520-393438"
	W0813 20:58:24.107764  429419 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:58:24.107795  429419 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:58:24.108145  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:24.108189  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:24.108490  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:58:24.112086  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:24.114751  429419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:58:24.114860  429419 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:58:24.114874  429419 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:58:24.114893  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:24.122761  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:24.122790  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:24.122822  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:24.122871  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:24.123050  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:24.123128  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33157
	I0813 20:58:24.123685  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:24.124250  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:24.124274  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:24.124712  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:24.125207  429419 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:24.125241  429419 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:24.126884  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:24.127059  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:24.137574  429419 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0813 20:58:24.138079  429419 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:24.138644  429419 main.go:130] libmachine: Using API Version  1
	I0813 20:58:24.138683  429419 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:24.139150  429419 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:24.139337  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:58:24.143020  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:58:24.143234  429419 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:58:24.143255  429419 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:58:24.143277  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:58:24.149505  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:24.150180  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:58:24.150206  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:58:24.150377  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:58:24.154837  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:58:24.155021  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:58:24.155158  429419 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:58:20.770874  429159 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.399100695s)
	I0813 20:58:20.770907  429159 containerd.go:553] Took 10.399210 seconds to extract the tarball
	I0813 20:58:20.770920  429159 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:58:20.852683  429159 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:58:21.024685  429159 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:58:21.098779  429159 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:58:21.187863  429159 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:58:21.208917  429159 docker.go:153] disabling docker service ...
	I0813 20:58:21.208987  429159 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:58:21.227986  429159 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:58:21.245738  429159 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:58:21.450215  429159 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:58:21.637278  429159 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:58:21.650630  429159 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:58:21.667831  429159 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvb
nRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKI
CAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 20:58:21.682737  429159 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:58:21.689792  429159 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:58:21.689855  429159 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:58:21.704659  429159 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
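
The fallback above is: if the bridge-nf-call-iptables sysctl is absent, load br_netfilter (which creates it), then force IPv4 forwarding on. A sketch of the same sequence, assuming these exact commands are available:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only exists once the br_netfilter module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
		}
	}
	// Pod-to-pod traffic needs IPv4 forwarding enabled on the node.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}
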
	I0813 20:58:21.711245  429159 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:58:21.847163  429159 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:58:23.377368  429159 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.530163338s)
	I0813 20:58:23.377402  429159 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:58:23.377461  429159 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:23.385715  429159 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:58:24.490807  429159 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:58:24.499589  429159 start.go:413] Will wait 60s for crictl version
	I0813 20:58:24.499656  429159 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:58:24.554180  429159 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:58:24.562233  429159 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:24.610748  429159 ssh_runner.go:149] Run: containerd --version
	I0813 20:58:24.696341  429159 out.go:177] * Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	I0813 20:58:24.696402  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:58:24.703026  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:24.703446  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:24.703475  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:24.703837  429159 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 20:58:24.710213  429159 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
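
The one-liner above makes the host.minikube.internal record idempotent: strip any stale line for that hostname, then append the current mapping. A pure-Go sketch of the same idea (the helper name is illustrative):

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry drops any existing record for host, then appends ip<TAB>host.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that does not already map this hostname.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
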
	I0813 20:58:24.726760  429159 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:58:24.726836  429159 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:24.773810  429159 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:58:24.773841  429159 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:58:24.773896  429159 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:58:24.826947  429159 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:58:24.826975  429159 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:58:24.827042  429159 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:58:24.871237  429159 cni.go:93] Creating CNI manager for ""
	I0813 20:58:24.871271  429159 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:24.871293  429159 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:58:24.871310  429159 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.75 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210813205735-393438 NodeName:kubernetes-upgrade-20210813205735-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.75 CgroupDr
iver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:58:24.871473  429159 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20210813205735-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20210813205735-393438
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.75:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:58:24.871576  429159 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20210813205735-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.75 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813205735-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
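
Drop-ins like the kubelet unit above are typically rendered from a template with the per-node values filled in; a sketch of that step (the template text and field names here are assumptions, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Render to stdout; the real flow scp's the result to the VM.
	if err := t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.14.0/kubelet",
		"NodeName":    "kubernetes-upgrade-20210813205735-393438",
		"NodeIP":      "192.168.39.75",
	}); err != nil {
		panic(err)
	}
}
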
	I0813 20:58:24.871637  429159 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0813 20:58:24.882390  429159 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:58:24.882472  429159 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:58:24.891407  429159 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (627 bytes)
	I0813 20:58:24.905473  429159 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:58:24.920036  429159 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0813 20:58:24.935864  429159 ssh_runner.go:149] Run: grep 192.168.39.75	control-plane.minikube.internal$ /etc/hosts
	I0813 20:58:24.941474  429159 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:58:24.954613  429159 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438 for IP: 192.168.39.75
	I0813 20:58:24.954693  429159 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:58:24.954717  429159 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:58:24.954794  429159 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key
	I0813 20:58:24.954840  429159 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt with IP's: []
	I0813 20:58:24.734703  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (4.199112085s)
	I0813 20:58:24.734729  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 20:58:24.734750  429197 containerd.go:280] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:58:24.734795  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0813 20:58:24.285934  429419 node_ready.go:35] waiting up to 6m0s for node "pause-20210813205520-393438" to be "Ready" ...
	I0813 20:58:24.286205  429419 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 20:58:24.290741  429419 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:58:24.291627  429419 node_ready.go:49] node "pause-20210813205520-393438" has status "Ready":"True"
	I0813 20:58:24.291649  429419 node_ready.go:38] duration metric: took 5.67319ms waiting for node "pause-20210813205520-393438" to be "Ready" ...
	I0813 20:58:24.291661  429419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:58:24.298236  429419 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jzmnb" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:24.309828  429419 pod_ready.go:92] pod "coredns-558bd4d5db-jzmnb" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:24.309851  429419 pod_ready.go:81] duration metric: took 11.58702ms waiting for pod "coredns-558bd4d5db-jzmnb" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:24.309864  429419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:24.320467  429419 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:58:25.534311  429419 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213810615s)
	I0813 20:58:25.534363  429419 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:25.534375  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .Close
	I0813 20:58:25.534492  429419 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.243729186s)
	I0813 20:58:25.534544  429419 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:25.534569  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .Close
	I0813 20:58:25.534756  429419 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:25.534789  429419 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:25.534814  429419 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:25.534837  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .Close
	I0813 20:58:25.535076  429419 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:25.535091  429419 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:25.535101  429419 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:25.535122  429419 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:25.535129  429419 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:25.535139  429419 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:25.535147  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .Close
	I0813 20:58:25.535857  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:58:25.537678  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:58:25.537716  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:58:25.537769  429419 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:25.537799  429419 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:25.537718  429419 main.go:130] libmachine: (pause-20210813205520-393438) Calling .Close
	I0813 20:58:25.540033  429419 main.go:130] libmachine: (pause-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:58:25.540081  429419 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:25.540099  429419 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:25.099453  429159 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt ...
	I0813 20:58:25.099491  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt: {Name:mkae4d24d257c78563025918a6580ab99b0feaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.100261  429159 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key ...
	I0813 20:58:25.100287  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key: {Name:mkfa8d8f503a7d06056da439d2d73adec0800347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.100741  429159 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key.c1c3c514
	I0813 20:58:25.100758  429159 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt.c1c3c514 with IP's: [192.168.39.75 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:58:25.316635  429159 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt.c1c3c514 ...
	I0813 20:58:25.316682  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt.c1c3c514: {Name:mk0ddc88b2eae04f840601f11a91883e44b6c00a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.317576  429159 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key.c1c3c514 ...
	I0813 20:58:25.317607  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key.c1c3c514: {Name:mk1da4a4675302206daa112b196fa85d8764a1bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.318247  429159 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt.c1c3c514 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt
	I0813 20:58:25.318344  429159 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key.c1c3c514 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key
	I0813 20:58:25.318422  429159 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.key
	I0813 20:58:25.318437  429159 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.crt with IP's: []
	I0813 20:58:25.449177  429159 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.crt ...
	I0813 20:58:25.449231  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.crt: {Name:mk6f31139e78ecf9d1a0fff92b8da6a65ce6ed9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.471285  429159 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.key ...
	I0813 20:58:25.471329  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.key: {Name:mkc7202e4bb67118a2e10bfb24a71a22b8a58801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:25.472144  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:58:25.472203  429159 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:58:25.472222  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:58:25.472254  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:58:25.472287  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:58:25.472321  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:58:25.472424  429159 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
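
The certs.go/crypto.go lines above issue apiserver and proxy-client certificates signed by the minikube CA, with the apiserver cert carrying the node and service IPs as SANs. A minimal, self-contained sketch of the equivalent crypto/x509 calls follows; the CA is self-generated here so the sketch runs end to end, and all names, serials, and validity periods are illustrative assumptions, not minikube's actual implementation:

    // certsketch.go - sketch of issuing a CA-signed serving cert with IP SANs,
    // in the spirit of the certs.go steps above. Error handling is trimmed.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key pair (minikube would load ca.crt/ca.key instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs seen in the log above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.39.75"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
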
	I0813 20:58:25.473721  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:58:25.499280  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:58:25.520376  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:58:25.545225  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:58:25.571406  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:58:25.593463  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:58:25.615334  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:58:25.635664  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:58:25.657451  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:58:25.678614  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:58:25.706563  429159 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:58:25.734769  429159 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:58:25.757416  429159 ssh_runner.go:149] Run: openssl version
	I0813 20:58:25.765477  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:58:25.774945  429159 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:25.780945  429159 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:25.780994  429159 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:25.788946  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:58:25.798444  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:58:25.810230  429159 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:58:25.815899  429159 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:58:25.815948  429159 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:58:25.824003  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:58:25.833913  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:58:25.844086  429159 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:58:25.850227  429159 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:58:25.850313  429159 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 20:58:25.858336  429159 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
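
The openssl/ln sequence above follows the standard OpenSSL CA-directory convention: each trusted PEM gets a symlink named <subject-hash>.0 in /etc/ssl/certs so OpenSSL can locate it by hash at verification time (b5213941 is the hash printed for minikubeCA.pem). A sketch of the same steps driven from Go, using the paths from the log; writing into /etc/ssl/certs requires root, and error handling is minimal:

    // hashlink.go - sketch of the "openssl x509 -hash" + "ln -fs" dance above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl prints the subject hash (e.g. b5213941) on its own line.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL looks up CAs in its certs dir as <hash>.0, <hash>.1, ...
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ignore error; this emulates "ln -fs"
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
    }
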
	I0813 20:58:25.868532  429159 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210813205735-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813205735-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:25.868629  429159 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:58:25.868681  429159 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:25.921567  429159 cri.go:76] found id: ""
	I0813 20:58:25.921655  429159 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:58:25.932828  429159 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:58:25.945134  429159 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:58:25.953921  429159 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:58:25.953967  429159 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
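
The line above runs kubeadm init over SSH with PATH pointed at the version-pinned cached binaries and a fixed --ignore-preflight-errors list. A sketch of assembling that command string (the flag values are taken verbatim from the log; how it is then shipped over SSH is outside this sketch):

    // kubeadmcmd.go - sketch of building the kubeadm init invocation above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        version := "v1.14.0"
        ignore := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250",
            "Swap",
        }
        cmd := fmt.Sprintf(
            `sudo env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
            version, strings.Join(ignore, ","))
        fmt.Println(cmd) // would be handed to `/bin/bash -c` on the node
    }
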
	I0813 20:58:22.872137  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:23.372637  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:23.872522  428960 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:23.914265  428960 api_server.go:70] duration metric: took 13.058602066s to wait for apiserver process to appear ...
	I0813 20:58:23.914297  428960 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:58:23.914309  428960 api_server.go:239] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0813 20:58:23.915203  428960 api_server.go:255] stopped: https://192.168.72.169:8443/healthz: Get "https://192.168.72.169:8443/healthz": dial tcp 192.168.72.169:8443: connect: connection refused
	I0813 20:58:24.416080  428960 api_server.go:239] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
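
The api_server.go lines above poll https://<node-ip>:8443/healthz, treating "connection refused" as "not up yet" and retrying until the endpoint returns 200. A minimal sketch of such a loop, assuming a 4-minute deadline and 500ms interval; TLS verification is skipped here for brevity, whereas a real client would trust the minikube CA:

    // healthz.go - sketch of the apiserver /healthz polling seen above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.169:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            // connection refused and non-200 responses both mean "retry"
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }
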
	I0813 20:58:25.542262  429419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 20:58:25.542283  429419 addons.go:344] enableAddons completed in 1.483681724s
	I0813 20:58:26.334738  429419 pod_ready.go:102] pod "etcd-pause-20210813205520-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 20:58:26.828297  429419 pod_ready.go:92] pod "etcd-pause-20210813205520-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:26.828324  429419 pod_ready.go:81] duration metric: took 2.518451386s waiting for pod "etcd-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:26.828338  429419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:26.836849  429419 pod_ready.go:92] pod "kube-apiserver-pause-20210813205520-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:26.836871  429419 pod_ready.go:81] duration metric: took 8.524124ms waiting for pod "kube-apiserver-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:26.836886  429419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.862455  429419 pod_ready.go:92] pod "kube-controller-manager-pause-20210813205520-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:27.862481  429419 pod_ready.go:81] duration metric: took 1.025585866s waiting for pod "kube-controller-manager-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.862494  429419 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mlf5c" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.870208  429419 pod_ready.go:92] pod "kube-proxy-mlf5c" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:27.870232  429419 pod_ready.go:81] duration metric: took 7.73045ms waiting for pod "kube-proxy-mlf5c" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.870245  429419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.891134  429419 pod_ready.go:92] pod "kube-scheduler-pause-20210813205520-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 20:58:27.891160  429419 pod_ready.go:81] duration metric: took 20.905213ms waiting for pod "kube-scheduler-pause-20210813205520-393438" in "kube-system" namespace to be "Ready" ...
	I0813 20:58:27.891170  429419 pod_ready.go:38] duration metric: took 3.599496172s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 20:58:27.891188  429419 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:58:27.891241  429419 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:27.905112  429419 api_server.go:70] duration metric: took 3.846582491s to wait for apiserver process to appear ...
	I0813 20:58:27.905137  429419 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:58:27.905149  429419 api_server.go:239] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I0813 20:58:27.914448  429419 api_server.go:265] https://192.168.61.151:8443/healthz returned 200:
	ok
	I0813 20:58:27.916188  429419 api_server.go:139] control plane version: v1.21.3
	I0813 20:58:27.916211  429419 api_server.go:129] duration metric: took 11.066889ms to wait for apiserver health ...
	I0813 20:58:27.916220  429419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:58:28.094936  429419 system_pods.go:59] 7 kube-system pods found
	I0813 20:58:28.094972  429419 system_pods.go:61] "coredns-558bd4d5db-jzmnb" [ea00ae4c-f4d9-414c-8762-6314a96c8a06] Running
	I0813 20:58:28.094981  429419 system_pods.go:61] "etcd-pause-20210813205520-393438" [c0f74993-053a-4721-89b9-a9f01c83cb4e] Running
	I0813 20:58:28.094988  429419 system_pods.go:61] "kube-apiserver-pause-20210813205520-393438" [1625a41a-5bb8-43ff-b2ca-fc5a2e96907d] Running
	I0813 20:58:28.094995  429419 system_pods.go:61] "kube-controller-manager-pause-20210813205520-393438" [203329cf-5319-4902-8b08-c8ce7d0adc7a] Running
	I0813 20:58:28.095002  429419 system_pods.go:61] "kube-proxy-mlf5c" [c0812228-e936-4bfa-9fbb-a4d0707f2a63] Running
	I0813 20:58:28.095009  429419 system_pods.go:61] "kube-scheduler-pause-20210813205520-393438" [a931b4e6-b130-4886-a7dc-604f54907b2f] Running
	I0813 20:58:28.095015  429419 system_pods.go:61] "storage-provisioner" [99920d7c-bb8d-4c65-bf44-b56f23a40e53] Running
	I0813 20:58:28.095024  429419 system_pods.go:74] duration metric: took 178.797733ms to wait for pod list to return data ...
	I0813 20:58:28.095037  429419 default_sa.go:34] waiting for default service account to be created ...
	I0813 20:58:28.293159  429419 default_sa.go:45] found service account: "default"
	I0813 20:58:28.293188  429419 default_sa.go:55] duration metric: took 198.132212ms for default service account to be created ...
	I0813 20:58:28.293201  429419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 20:58:28.493886  429419 system_pods.go:86] 7 kube-system pods found
	I0813 20:58:28.493920  429419 system_pods.go:89] "coredns-558bd4d5db-jzmnb" [ea00ae4c-f4d9-414c-8762-6314a96c8a06] Running
	I0813 20:58:28.493928  429419 system_pods.go:89] "etcd-pause-20210813205520-393438" [c0f74993-053a-4721-89b9-a9f01c83cb4e] Running
	I0813 20:58:28.493935  429419 system_pods.go:89] "kube-apiserver-pause-20210813205520-393438" [1625a41a-5bb8-43ff-b2ca-fc5a2e96907d] Running
	I0813 20:58:28.493942  429419 system_pods.go:89] "kube-controller-manager-pause-20210813205520-393438" [203329cf-5319-4902-8b08-c8ce7d0adc7a] Running
	I0813 20:58:28.493948  429419 system_pods.go:89] "kube-proxy-mlf5c" [c0812228-e936-4bfa-9fbb-a4d0707f2a63] Running
	I0813 20:58:28.493954  429419 system_pods.go:89] "kube-scheduler-pause-20210813205520-393438" [a931b4e6-b130-4886-a7dc-604f54907b2f] Running
	I0813 20:58:28.493960  429419 system_pods.go:89] "storage-provisioner" [99920d7c-bb8d-4c65-bf44-b56f23a40e53] Running
	I0813 20:58:28.493969  429419 system_pods.go:126] duration metric: took 200.762018ms to wait for k8s-apps to be running ...
	I0813 20:58:28.493979  429419 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 20:58:28.494028  429419 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:58:28.511568  429419 system_svc.go:56] duration metric: took 17.580044ms WaitForService to wait for kubelet.
	I0813 20:58:28.511598  429419 kubeadm.go:547] duration metric: took 4.453071155s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 20:58:28.511622  429419 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:58:28.692948  429419 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:58:28.692983  429419 node_conditions.go:123] node cpu capacity is 2
	I0813 20:58:28.693007  429419 node_conditions.go:105] duration metric: took 181.379058ms to run NodePressure ...
	I0813 20:58:28.693024  429419 start.go:231] waiting for startup goroutines ...
	I0813 20:58:28.787756  429419 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 20:58:28.790013  429419 out.go:177] * Done! kubectl is now configured to use "pause-20210813205520-393438" cluster and "default" namespace by default
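
The pod_ready.go waits in the run above repeatedly fetch each control-plane pod and check its Ready condition until it flips to True or the 6m0s budget runs out. A sketch of that pattern with client-go; the kubeconfig path is illustrative, and the pod name is the etcd pod from the log:

    // podready.go - sketch of a pod Ready wait like the pod_ready.go calls above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-pause-20210813205520-393438", metav1.GetOptions{})
            if err != nil {
                return false, nil // not found yet; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("wait result:", err) // nil means the pod went Ready in time
    }
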
	I0813 20:58:27.088638  429159 out.go:204]   - Generating certificates and keys ...
	I0813 20:58:26.987192  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.252367187s)
	I0813 20:58:26.987224  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0813 20:58:26.987260  429197 containerd.go:280] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0813 20:58:26.987309  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0813 20:58:28.378335  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0: (1.390986894s)
	I0813 20:58:28.378374  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache
	I0813 20:58:28.378404  429197 containerd.go:280] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0
	I0813 20:58:28.378489  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0
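
The cache_images.go lines above preload image tarballs into containerd's k8s.io namespace, which is the namespace the CRI plugin (and therefore kubelet) reads images from; importing into any other namespace would leave the images invisible to the cluster. A sketch shelling out the same way the log does, with the tarball path taken from the run above:

    // ctrimport.go - sketch of loading a cached image tarball into containerd's
    // k8s.io namespace, mirroring the "ctr images import" runs above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/var/lib/minikube/images/kube-proxy_v1.20.0"
        // -n=k8s.io: images must land in the k8s.io namespace for the CRI
        // plugin to see them.
        cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }
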
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	33fae69af6bcf       6e38f40d628db       5 seconds ago        Running             storage-provisioner       0                   76aee79f917be
	b6372d9d76486       296a6d5035e2d       20 seconds ago       Running             coredns                   1                   cfc4c8785e479
	afabb5f130410       0369cf4303ffd       20 seconds ago       Running             etcd                      1                   3f41ec729ef71
	57f3f32f280d8       bc2bb319a7038       21 seconds ago       Running             kube-controller-manager   1                   ce1823a3db17a
	1053b5b4ba3ab       3d174f00aa39e       21 seconds ago       Running             kube-apiserver            1                   a655f217cf1c5
	0d1a942c8b8c2       adb2816ea823a       22 seconds ago       Running             kube-proxy                1                   47e050012dbca
	1d84b053549cf       6be0dc1302e30       22 seconds ago       Running             kube-scheduler            1                   53f314c6cf963
	1bba0d6deb033       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   3f6f239c2851f
	63c0cc1fc4c0c       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   b1f1f31f28005
	698bbea7ce6e9       6be0dc1302e30       About a minute ago   Exited              kube-scheduler            0                   5a66336a35add
	df02c38abac90       0369cf4303ffd       About a minute ago   Exited              etcd                      0                   4cf745987f602
	68bad43283064       bc2bb319a7038       About a minute ago   Exited              kube-controller-manager   0                   5340b4aa5ca39
	11c2753c9a8a7       3d174f00aa39e       About a minute ago   Exited              kube-apiserver            0                   304b611d719ea
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:58:32 UTC. --
	Aug 13 20:58:10 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:10.602825484Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\""
	Aug 13 20:58:10 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:10.630050995Z" level=info msg="CreateContainer within sandbox \"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
	Aug 13 20:58:10 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:10.674267060Z" level=info msg="StartContainer for \"1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c\" returns successfully"
	Aug 13 20:58:10 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:10.848732655Z" level=info msg="CreateContainer within sandbox \"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\""
	Aug 13 20:58:10 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:10.859627067Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.078142311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-pause-20210813205520-393438,Uid:86a000e5c08d32d80b2fd4e89cd34dd1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.145266794Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.321521915Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\" returns successfully"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.349622186Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.353268082Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.376810925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-jzmnb,Uid:ea00ae4c-f4d9-414c-8762-6314a96c8a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.451595226Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.633919582Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.635324605Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.770314446Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.016041628Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.229109322Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\" returns successfully"
	Aug 13 20:58:15 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:15.472167045Z" level=info msg="StartContainer for \"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5\" returns successfully"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.856093567Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.901091488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a pid=4886
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.481756294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.492027606Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.607213854Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.614295374Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.876068804Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\" returns successfully"
	
	* 
	* ==> coredns [63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:57:49.980199       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.978) (total time: 30001ms):
	Trace[2019727887]: [30.001847928s] [30.001847928s] END
	E0813 20:57:49.980279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.980655       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[939984059]: [30.00501838s] [30.00501838s] END
	E0813 20:57:49.980691       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.981307       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[911902081]: [30.005916603s] [30.005916603s] END
	E0813 20:57:49.981521       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	E0813 20:58:20.310855       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.863604] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.032050] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.917916] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +2.669268] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.335717] vboxguest: loading out-of-tree module taints kernel.
	[  +0.008488] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:56] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.927578] systemd-fstab-generator[2132]: Ignoring "noauto" for root device
	[  +0.140064] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.195734] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +8.321149] systemd-fstab-generator[2386]: Ignoring "noauto" for root device
	[Aug13 20:57] systemd-fstab-generator[2823]: Ignoring "noauto" for root device
	[ +16.072552] kauditd_printk_skb: 38 callbacks suppressed
	[ +34.372009] kauditd_printk_skb: 116 callbacks suppressed
	[  +3.958113] NFSD: Unable to end grace period: -110
	[Aug13 20:58] systemd-fstab-generator[3706]: Ignoring "noauto" for root device
	[  +0.206181] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +0.261980] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +19.584639] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.482860] systemd-fstab-generator[4981]: Ignoring "noauto" for root device
	[  +0.846439] systemd-fstab-generator[5035]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5] <==
	* 2021-08-13 20:58:16.461857 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (198.960862ms) to execute
	2021-08-13 20:58:16.462013 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 " with result "range_response_count:0 size:5" took too long (199.025411ms) to execute
	2021-08-13 20:58:16.462116 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (190.42222ms) to execute
	2021-08-13 20:58:16.462337 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (179.184455ms) to execute
	2021-08-13 20:58:16.462702 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (172.711746ms) to execute
	2021-08-13 20:58:16.462925 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (170.528555ms) to execute
	2021-08-13 20:58:16.463221 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.293847ms) to execute
	2021-08-13 20:58:16.463747 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (158.490371ms) to execute
	2021-08-13 20:58:16.464124 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (152.464331ms) to execute
	2021-08-13 20:58:16.477058 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (151.343452ms) to execute
	2021-08-13 20:58:16.478005 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (142.028022ms) to execute
	2021-08-13 20:58:16.478939 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (142.259692ms) to execute
	2021-08-13 20:58:16.479721 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (129.328346ms) to execute
	2021-08-13 20:58:16.479967 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (126.882803ms) to execute
	2021-08-13 20:58:16.480303 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:11 size:5977" took too long (116.866258ms) to execute
	2021-08-13 20:58:16.480852 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (116.970061ms) to execute
	2021-08-13 20:58:23.354247 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/cluster-admin\" " with result "range_response_count:1 size:718" took too long (1.914180768s) to execute
	2021-08-13 20:58:23.356685 W | etcdserver: request "header:<ID:14244176716868856811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" value_size:717 lease:5020804680014080881 >> failure:<>>" with result "size:16" took too long (1.23562281s) to execute
	2021-08-13 20:58:23.370142 W | wal: sync duration of 1.250273887s, expected less than 1s
	2021-08-13 20:58:23.370676 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.152835664s) to execute
	2021-08-13 20:58:23.371565 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.728436243s) to execute
	2021-08-13 20:58:23.371769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.847351028s) to execute
	2021-08-13 20:58:23.378962 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-jzmnb\" " with result "range_response_count:1 size:4862" took too long (671.753147ms) to execute
	2021-08-13 20:58:24.705568 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:1 size:4394" took too long (221.501911ms) to execute
	2021-08-13 20:58:26.341296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418] <==
	* 2021-08-13 20:56:51.867973 I | embed: serving client requests on 192.168.61.151:2379
	2021-08-13 20:56:51.875825 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:57:01.271062 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" " with result "range_response_count:0 size:5" took too long (480.2351ms) to execute
	2021-08-13 20:57:01.272131 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813205520-393438\" " with result "range_response_count:1 size:3758" took too long (875.676682ms) to execute
	2021-08-13 20:57:01.273551 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f12dc\" " with result "range_response_count:1 size:735" took too long (792.283833ms) to execute
	2021-08-13 20:57:02.171621 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (872.818648ms) to execute
	2021-08-13 20:57:02.172160 W | etcdserver: request "header:<ID:14244176716848216677 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:222 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3993 >> failure:<request_range:<key:\"/registry/minions/pause-20210813205520-393438\" > >>" with result "size:16" took too long (128.660032ms) to execute
	2021-08-13 20:57:02.172330 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (871.615956ms) to execute
	2021-08-13 20:57:02.172733 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f2f58\" " with result "range_response_count:1 size:733" took too long (859.92991ms) to execute
	2021-08-13 20:57:02.172849 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (853.236151ms) to execute
	2021-08-13 20:57:09.290631 W | etcdserver: request "header:<ID:14244176716848216792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3277 >> failure:<>>" with result "size:5" took too long (472.704737ms) to execute
	2021-08-13 20:57:09.291659 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (897.879132ms) to execute
	2021-08-13 20:57:09.298807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210813205520-393438\" " with result "range_response_count:1 size:4986" took too long (528.421007ms) to execute
	2021-08-13 20:57:09.299124 W | etcdserver: read-only range request "key:\"/registry/csinodes/pause-20210813205520-393438\" " with result "range_response_count:1 size:656" took too long (894.254864ms) to execute
	2021-08-13 20:57:13.314052 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (127.466898ms) to execute
	2021-08-13 20:57:13.314663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (132.387511ms) to execute
	2021-08-13 20:57:16.343764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:20.988739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:30.989151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:39.442816 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (120.094417ms) to execute
	2021-08-13 20:57:40.988900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:50.989064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:58:00.244154 W | etcdserver: request "header:<ID:14244176716848217456 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:483 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:69 lease:5020804679993441646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>" with result "size:16" took too long (162.220853ms) to execute
	2021-08-13 20:58:00.245134 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (881.389444ms) to execute
	2021-08-13 20:58:00.989778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:58:42 up 2 min,  0 users,  load average: 1.91, 0.94, 0.37
	Linux pause-20210813205520-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c] <==
	* I0813 20:58:20.351321       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:58:20.372737       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0813 20:58:20.375890       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 20:58:20.387225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:58:20.401103       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:58:20.403283       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 20:58:20.407207       1 cache.go:39] Caches are synced for autoregister controller
	I0813 20:58:20.410957       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:58:21.065658       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:58:21.066635       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:58:21.090819       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:58:23.358425       1 trace.go:205] Trace[1442514083]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.628) (total time: 1729ms):
	Trace[1442514083]: ---"Object stored in database" 1729ms (20:58:00.358)
	Trace[1442514083]: [1.729557914s] [1.729557914s] END
	I0813 20:58:23.359893       1 trace.go:205] Trace[553017594]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.438) (total time: 1920ms):
	Trace[553017594]: ---"About to write a response" 1919ms (20:58:00.358)
	Trace[553017594]: [1.920866407s] [1.920866407s] END
	I0813 20:58:23.381663       1 trace.go:205] Trace[1143050190]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jzmnb,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:22.699) (total time: 682ms):
	Trace[1143050190]: ---"About to write a response" 681ms (20:58:00.380)
	Trace[1143050190]: [682.310081ms] [682.310081ms] END
	I0813 20:58:25.230359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:58:25.281700       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:58:25.373725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:58:25.413105       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:58:25.560667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016] <==
	* I0813 20:57:09.309542       1 trace.go:205] Trace[2046907584]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.501) (total time: 806ms):
	Trace[2046907584]: [806.482297ms] [806.482297ms] END
	I0813 20:57:09.310802       1 trace.go:205] Trace[146959614]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.771) (total time: 538ms):
	Trace[146959614]: ---"Object stored in database" 538ms (20:57:00.310)
	Trace[146959614]: [538.954794ms] [538.954794ms] END
	I0813 20:57:09.311138       1 trace.go:205] Trace[1128950750]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20210813205520-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1128950750]: ---"About to write a response" 537ms (20:57:00.307)
	Trace[1128950750]: [541.267103ms] [541.267103ms] END
	I0813 20:57:09.311248       1 trace.go:205] Trace[1268223707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1268223707]: ---"Object stored in database" 540ms (20:57:00.310)
	Trace[1268223707]: [541.971563ms] [541.971563ms] END
	I0813 20:57:09.311433       1 trace.go:205] Trace[1977445463]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.772) (total time: 538ms):
	Trace[1977445463]: ---"Object stored in database" 537ms (20:57:00.310)
	Trace[1977445463]: [538.348208ms] [538.348208ms] END
	I0813 20:57:09.321803       1 trace.go:205] Trace[494614999]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 552ms):
	Trace[494614999]: [552.453895ms] [552.453895ms] END
	I0813 20:57:09.345220       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:57:16.259955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:57:16.380865       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:57:37.272234       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:57:37.272418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:57:37.272507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:58:00.246413       1 trace.go:205] Trace[1997979141]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (13-Aug-2021 20:57:59.258) (total time: 987ms):
	Trace[1997979141]: ---"Transaction committed" 984ms (20:58:00.246)
	Trace[1997979141]: [987.521712ms] [987.521712ms] END
	
	* 
	* ==> kube-controller-manager [57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849] <==
	* I0813 20:58:25.074041       1 daemon_controller.go:285] Starting daemon sets controller
	I0813 20:58:25.074050       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0813 20:58:25.116517       1 controllermanager.go:574] Started "horizontalpodautoscaling"
	I0813 20:58:25.116556       1 horizontal.go:169] Starting HPA controller
	I0813 20:58:25.116758       1 shared_informer.go:240] Waiting for caches to sync for HPA
	E0813 20:58:25.120839       1 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0813 20:58:25.120857       1 controllermanager.go:566] Skipping "service"
	I0813 20:58:25.124370       1 controllermanager.go:574] Started "persistentvolume-expander"
	I0813 20:58:25.124569       1 expand_controller.go:327] Starting expand controller
	I0813 20:58:25.124579       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0813 20:58:25.175876       1 controllermanager.go:574] Started "namespace"
	I0813 20:58:25.176251       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:58:25.176376       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0813 20:58:25.185657       1 controllermanager.go:574] Started "serviceaccount"
	I0813 20:58:25.187325       1 serviceaccounts_controller.go:117] Starting service account controller
	I0813 20:58:25.187340       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0813 20:58:25.192151       1 controllermanager.go:574] Started "replicaset"
	I0813 20:58:25.192315       1 replica_set.go:182] Starting replicaset controller
	I0813 20:58:25.192327       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0813 20:58:25.200141       1 controllermanager.go:574] Started "bootstrapsigner"
	I0813 20:58:25.200611       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0813 20:58:25.204061       1 controllermanager.go:574] Started "cronjob"
	I0813 20:58:25.204578       1 cronjob_controllerv2.go:125] Starting cronjob controller v2
	I0813 20:58:25.204590       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 20:58:25.207401       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8] <==
	* I0813 20:57:15.593972       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:57:15.593991       1 disruption.go:371] Sending events to api server.
	I0813 20:57:15.596695       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:57:15.636700       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:57:15.652896       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:57:15.701400       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:57:15.701628       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0813 20:57:15.701702       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:57:15.701748       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:57:15.701825       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:57:15.702024       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	I0813 20:57:15.735577       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:57:15.751667       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:57:15.767285       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:15.796364       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0813 20:57:15.847876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:16.199991       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.200121       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:57:16.224599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.277997       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:57:16.457337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlf5c"
	I0813 20:57:16.545672       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-fhxw7"
	I0813 20:57:16.596799       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-jzmnb"
	I0813 20:57:16.804186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:57:16.819742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-fhxw7"
	
	* 
	* ==> kube-proxy [0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5] <==
	* E0813 20:58:20.334846       1 node.go:161] Failed to retrieve node info: nodes "pause-20210813205520-393438" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0813 20:58:21.364522       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:58:21.365223       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:58:21.366125       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:58:23.461362       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:58:23.462248       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:58:23.465333       1 server_others.go:212] Using iptables Proxier.
	I0813 20:58:23.483125       1 server.go:643] Version: v1.21.3
	I0813 20:58:23.488959       1 config.go:315] Starting service config controller
	I0813 20:58:23.490323       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:58:23.490593       1 config.go:224] Starting endpoint slice config controller
	I0813 20:58:23.490606       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:58:23.512424       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:58:23.514744       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:58:23.591163       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:58:23.593313       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62] <==
	* I0813 20:57:20.123665       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:57:20.123841       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:57:20.123909       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:57:20.180054       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:57:20.180158       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:57:20.180173       1 server_others.go:212] Using iptables Proxier.
	I0813 20:57:20.181825       1 server.go:643] Version: v1.21.3
	I0813 20:57:20.184367       1 config.go:315] Starting service config controller
	I0813 20:57:20.184561       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:57:20.184600       1 config.go:224] Starting endpoint slice config controller
	I0813 20:57:20.184604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:57:20.203222       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:57:20.207174       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:57:20.285130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:57:20.285144       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c] <==
	* I0813 20:58:11.830530       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:58:20.220887       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:58:20.224373       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:58:20.224624       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:58:20.224640       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:58:20.341243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:58:20.343223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.343608       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.347257       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 20:58:20.444874       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889] <==
	* I0813 20:56:57.340096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:56:57.373873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:57.375600       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:57.398047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:57.406392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:56:57.418940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.424521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:57.426539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:56:57.426578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:56:57.428616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:56:57.428811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:57.428897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:56:58.261670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:58.311937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:58.405804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.463800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:58.585826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.615525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:58.626736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.669986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:58.791820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:57:01.440271       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:58:43 UTC. --
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.453957    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454194    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454410    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454727    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454757    2832 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.611414    2832 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-20210813205520-393438.169af943ec02b0a4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-20210813205520-393438", UID:"36ca0d21ef43020c8f018e62049ff15f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.61.151:8443/readyz\": dial tcp 192.168.61.151:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-20210813205520-393438"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.61.151:8443: connect: connection refused'(may retry after sleeping)
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.428873    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429203    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429890    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430165    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430396    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:10.430620    2832 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430883    2832 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.632976    2832 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:11.038724    2832 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:11.294567    2832 status_manager.go:566] "Failed to get status for pod" podUID=469cea0375ae276925a50e4dde7e4ace pod="kube-system/kube-scheduler-pause-20210813205520-393438" error="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-20210813205520-393438\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.266431    2832 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.269986    2832 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.541317    2832 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590904    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6vd\" (UniqueName: \"kubernetes.io/projected/99920d7c-bb8d-4c65-bf44-b56f23a40e53-kube-api-access-xw6vd\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590979    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/99920d7c-bb8d-4c65-bf44-b56f23a40e53-tmp\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:29 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:29.362225    2832 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000328290, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000328280)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f0480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bcc80, 0x18e5530, 0xc0003284c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004ceee0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004ceee0, 0x18b3d60, 0xc000311f80, 0x1, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ceee0, 0x3b9aca00, 0x0, 0x17a0501, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0004ceee0, 0x3b9aca00, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:58:42.500616  429686 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (328.445481ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25: exit status 110 (11.591217955s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:34 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | node start m03                           |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	| stop    | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:45 UTC | Fri, 13 Aug 2021 20:34:51 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:34:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:58 UTC | Fri, 13 Aug 2021 20:40:59 UTC |
	|         | node delete m03                          |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:00 UTC | Fri, 13 Aug 2021 20:44:04 UTC |
	|         | stop                                     |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:48:01 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2          |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:01 UTC | Fri, 13 Aug 2021 20:49:01 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:02 UTC | Fri, 13 Aug 2021 20:49:03 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:03 UTC | Fri, 13 Aug 2021 20:49:05 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:38 UTC | Fri, 13 Aug 2021 20:52:46 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:47 UTC | Fri, 13 Aug 2021 20:52:48 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:48 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2           |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:41 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:41 UTC | Fri, 13 Aug 2021 20:54:41 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:42 UTC | Fri, 13 Aug 2021 20:54:42 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:55 UTC | Fri, 13 Aug 2021 20:55:02 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:20 UTC | Fri, 13 Aug 2021 20:55:20 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:33 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=kvm2                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:33 UTC | Fri, 13 Aug 2021 20:57:35 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                 |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:54 UTC | Fri, 13 Aug 2021 20:58:28 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:27 UTC | Fri, 13 Aug 2021 20:58:34 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| logs    | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:34 UTC | Fri, 13 Aug 2021 20:58:35 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:35 UTC | Fri, 13 Aug 2021 20:58:36 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:58:36
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:58:36.952203  429844 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:58:36.952268  429844 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:36.952271  429844 out.go:311] Setting ErrFile to fd 2...
	I0813 20:58:36.952274  429844 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:36.952377  429844 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:58:36.952620  429844 out.go:305] Setting JSON to false
	I0813 20:58:36.993458  429844 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6079,"bootTime":1628882238,"procs":193,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:58:36.993602  429844 start.go:121] virtualization: kvm guest
	I0813 20:58:36.996289  429844 out.go:177] * [force-systemd-env-20210813205836-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:58:36.998267  429844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:36.996433  429844 notify.go:169] Checking for updates...
	I0813 20:58:36.999665  429844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:58:37.001250  429844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.002637  429844 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:58:37.004247  429844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0813 20:58:37.004726  429844 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:58:37.004843  429844 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:58:37.004926  429844 config.go:177] Loaded profile config "running-upgrade-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:58:37.004973  429844 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:58:37.038645  429844 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:58:37.038692  429844 start.go:278] selected driver: kvm2
	I0813 20:58:37.038699  429844 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:58:37.038719  429844 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:58:37.039903  429844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:58:37.040054  429844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:58:37.053876  429844 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:58:37.053933  429844 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:58:37.054106  429844 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:58:37.054129  429844 cni.go:93] Creating CNI manager for ""
	I0813 20:58:37.054137  429844 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:37.054146  429844 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:58:37.054157  429844 start_flags.go:277] config:
	{Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:37.054268  429844 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:58:37.056320  429844 out.go:177] * Starting control plane node force-systemd-env-20210813205836-393438 in cluster force-systemd-env-20210813205836-393438
	I0813 20:58:37.056345  429844 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:58:37.056422  429844 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:58:37.056447  429844 cache.go:56] Caching tarball of preloaded images
	I0813 20:58:37.056610  429844 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:58:37.056638  429844 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:58:37.056768  429844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/config.json ...
	I0813 20:58:37.056798  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/config.json: {Name:mk2424ff0b393a5833d75487ec48825d62c046b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:37.056936  429844 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:58:37.056960  429844 start.go:313] acquiring machines lock for force-systemd-env-20210813205836-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:58:37.057007  429844 start.go:317] acquired machines lock for "force-systemd-env-20210813205836-393438" in 32.17µs
	I0813 20:58:37.057007  429844 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:58:37.057090  429844 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:58:40.245466  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (9.240993229s)
	I0813 20:58:40.245499  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:58:40.245526  429197 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:40.245576  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:37.059284  429844 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 20:58:37.059435  429844 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:37.059476  429844 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:37.071815  429844 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44353
	I0813 20:58:37.072267  429844 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:37.072895  429844 main.go:130] libmachine: Using API Version  1
	I0813 20:58:37.072918  429844 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:37.073296  429844 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:37.073509  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetMachineName
	I0813 20:58:37.073656  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .DriverName
	I0813 20:58:37.073822  429844 start.go:160] libmachine.API.Create for "force-systemd-env-20210813205836-393438" (driver="kvm2")
	I0813 20:58:37.073856  429844 client.go:168] LocalClient.Create starting
	I0813 20:58:37.073888  429844 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:58:37.073915  429844 main.go:130] libmachine: Decoding PEM data...
	I0813 20:58:37.073935  429844 main.go:130] libmachine: Parsing certificate...
	I0813 20:58:37.074104  429844 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:58:37.074133  429844 main.go:130] libmachine: Decoding PEM data...
	I0813 20:58:37.074148  429844 main.go:130] libmachine: Parsing certificate...
	I0813 20:58:37.074201  429844 main.go:130] libmachine: Running pre-create checks...
	I0813 20:58:37.074216  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .PreCreateCheck
	I0813 20:58:37.074534  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetConfigRaw
	I0813 20:58:37.075075  429844 main.go:130] libmachine: Creating machine...
	I0813 20:58:37.075104  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .Create
	I0813 20:58:37.075246  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating KVM machine...
	I0813 20:58:37.078160  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | found existing default KVM network
	I0813 20:58:37.080000  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.079830  429868 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:ff:3c}}
	I0813 20:58:37.080866  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.080772  429868 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 20:58:37.081886  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.081803  429868 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:b4:07}}
	I0813 20:58:37.083003  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.082914  429868 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:d0:b0}}
	I0813 20:58:37.085530  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.085420  429868 network.go:288] reserving subnet 192.168.83.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.83.0:0xc0000be0a8] misses:0}
	I0813 20:58:37.085577  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.085460  429868 network.go:235] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:58:37.113053  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | trying to create private KVM network mk-force-systemd-env-20210813205836-393438 192.168.83.0/24...
	I0813 20:58:37.387280  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | private KVM network mk-force-systemd-env-20210813205836-393438 192.168.83.0/24 created
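
[editor's note] The scan above walks candidate private /24 ranges, skips each one already bound to a virbr bridge, and reserves the first free subnet before creating the libvirt network. A minimal sketch of that scan follows, assuming an interface-address lookup is a sufficient "taken" test (minikube's network.go additionally consults libvirt and an in-process reservation map); all names in it are illustrative, not minikube's.

```go
package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether ip is already assigned to a local
// interface (e.g. a libvirt virbrN bridge), as in the "taken" lines above.
func gatewayInUse(ip net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Same candidate order the log walks: .39, .50, .61, .72, .83, ...
	for _, third := range []byte{39, 50, 61, 72, 83, 94} {
		gw := net.IPv4(192, 168, third, 1)
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
		return
	}
	fmt.Println("no free private subnet found")
}
```
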
	I0813 20:58:37.387319  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 ...
	I0813 20:58:37.387343  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.387269  429868 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.387370  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:58:37.394836  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:58:37.595803  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.595655  429868 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438/id_rsa...
	I0813 20:58:37.683222  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.683102  429868 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438/force-systemd-env-20210813205836-393438.rawdisk...
	I0813 20:58:37.683254  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Writing magic tar header
	I0813 20:58:37.683270  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Writing SSH key tar header
	I0813 20:58:37.683287  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.683226  429868 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 ...
	I0813 20:58:37.683364  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438
	I0813 20:58:37.683421  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 (perms=drwx------)
	I0813 20:58:37.683459  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:58:37.683476  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:58:37.683518  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.683542  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:58:37.683563  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:58:37.683583  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:58:37.683607  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:58:37.683631  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:58:37.683641  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating domain...
	I0813 20:58:37.683658  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:58:37.683668  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:58:37.683680  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home
	I0813 20:58:37.683693  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Skipping /home - not owner
	I0813 20:58:37.705670  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:f8:d3:f1 in network default
	I0813 20:58:37.706154  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring networks are active...
	I0813 20:58:37.706184  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:37.708124  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring network default is active
	I0813 20:58:37.708535  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring network mk-force-systemd-env-20210813205836-393438 is active
	I0813 20:58:37.709174  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Getting domain xml...
	I0813 20:58:37.711001  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating domain...
	I0813 20:58:38.127750  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Waiting to get IP...
	I0813 20:58:38.128819  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.129396  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.129460  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.129376  429868 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:58:38.393943  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.394483  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.394517  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.394426  429868 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:58:38.777141  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.777637  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.777673  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.777573  429868 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:58:39.202079  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.202628  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.202689  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:39.202573  429868 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:58:39.677262  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.677759  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.677907  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:39.677815  429868 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:58:40.266574  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:40.267149  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:40.267187  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:40.267073  429868 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:58:41.102502  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.103078  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.103108  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:41.103009  429868 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:58:41.851012  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.851569  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.851607  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:41.851526  429868 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
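
[editor's note] The repeated "will retry after ...: waiting for machine to come up" lines above are one poll loop waiting for the new domain to obtain a DHCP lease. The sketch below reproduces the shape of that loop under stated assumptions: a jittered delay that trends upward (matching the logged 263ms, 381ms, 422ms, ... progression in spirit), with pollIP, the backoff constants, and the returned address all stand-ins for this example rather than minikube's retry.go.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// pollIP stands in for querying libvirt's DHCP leases; here the VM
// "comes up" on the eighth attempt.
func pollIP(attempt int) (string, error) {
	if attempt < 8 {
		return "", errNoIP
	}
	return "192.168.83.2", nil // hypothetical first lease in the reserved subnet
}

func main() {
	for attempt := 1; ; attempt++ {
		if ip, err := pollIP(attempt); err == nil {
			fmt.Println("machine came up at", ip)
			return
		}
		// Jittered delay that grows with the attempt number, so
		// consecutive waits roughly climb like the logged sequence.
		d := time.Duration(float64(attempt) * (0.5 + rand.Float64()) * float64(250*time.Millisecond))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
	}
}
```
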
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	33fae69af6bcf       6e38f40d628db       17 seconds ago       Exited              storage-provisioner       0                   76aee79f917be
	b6372d9d76486       296a6d5035e2d       32 seconds ago       Running             coredns                   1                   cfc4c8785e479
	afabb5f130410       0369cf4303ffd       33 seconds ago       Running             etcd                      1                   3f41ec729ef71
	57f3f32f280d8       bc2bb319a7038       33 seconds ago       Running             kube-controller-manager   1                   ce1823a3db17a
	1053b5b4ba3ab       3d174f00aa39e       33 seconds ago       Running             kube-apiserver            1                   a655f217cf1c5
	0d1a942c8b8c2       adb2816ea823a       34 seconds ago       Running             kube-proxy                1                   47e050012dbca
	1d84b053549cf       6be0dc1302e30       34 seconds ago       Running             kube-scheduler            1                   53f314c6cf963
	1bba0d6deb033       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   3f6f239c2851f
	63c0cc1fc4c0c       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   b1f1f31f28005
	698bbea7ce6e9       6be0dc1302e30       About a minute ago   Exited              kube-scheduler            0                   5a66336a35add
	df02c38abac90       0369cf4303ffd       About a minute ago   Exited              etcd                      0                   4cf745987f602
	68bad43283064       bc2bb319a7038       About a minute ago   Exited              kube-controller-manager   0                   5340b4aa5ca39
	11c2753c9a8a7       3d174f00aa39e       About a minute ago   Exited              kube-apiserver            0                   304b611d719ea
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:58:44 UTC. --
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.078142311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-pause-20210813205520-393438,Uid:86a000e5c08d32d80b2fd4e89cd34dd1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.145266794Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.321521915Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\" returns successfully"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.349622186Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.353268082Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.376810925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-jzmnb,Uid:ea00ae4c-f4d9-414c-8762-6314a96c8a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.451595226Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.633919582Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.635324605Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.770314446Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.016041628Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.229109322Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\" returns successfully"
	Aug 13 20:58:15 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:15.472167045Z" level=info msg="StartContainer for \"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5\" returns successfully"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.856093567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.901091488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a pid=4886
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.481756294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.492027606Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.607213854Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.614295374Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.876068804Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\" returns successfully"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.156236073Z" level=info msg="Finish piping stderr of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.158102368Z" level=info msg="Finish piping stdout of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.159567062Z" level=info msg="TaskExit event &TaskExit{ContainerID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,Pid:4945,ExitStatus:255,ExitedAt:2021-08-13 20:58:41.157732657 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217770540Z" level=info msg="shim disconnected" id=33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217941244Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
	
	* 
	* ==> coredns [63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:57:49.980199       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.978) (total time: 30001ms):
	Trace[2019727887]: [30.001847928s] [30.001847928s] END
	E0813 20:57:49.980279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.980655       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[939984059]: [30.00501838s] [30.00501838s] END
	E0813 20:57:49.980691       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.981307       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[911902081]: [30.005916603s] [30.005916603s] END
	E0813 20:57:49.981521       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
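
[editor's note] The three reflector failures above are one symptom: this coredns instance could not open a TCP connection to the kubernetes service VIP within its 30s list window. The probe below reproduces just that connectivity check as a standalone sketch; the address comes from the log, while the 5-second deadline is an arbitrary choice for the example, and it must run from inside the cluster network to be meaningful.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster apiserver VIP seen in the errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // an i/o timeout here matches the reflector errors
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}
```
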
	
	* 
	* ==> coredns [b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	E0813 20:58:20.310855       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.863604] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.032050] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.917916] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +2.669268] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.335717] vboxguest: loading out-of-tree module taints kernel.
	[  +0.008488] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:56] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.927578] systemd-fstab-generator[2132]: Ignoring "noauto" for root device
	[  +0.140064] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.195734] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +8.321149] systemd-fstab-generator[2386]: Ignoring "noauto" for root device
	[Aug13 20:57] systemd-fstab-generator[2823]: Ignoring "noauto" for root device
	[ +16.072552] kauditd_printk_skb: 38 callbacks suppressed
	[ +34.372009] kauditd_printk_skb: 116 callbacks suppressed
	[  +3.958113] NFSD: Unable to end grace period: -110
	[Aug13 20:58] systemd-fstab-generator[3706]: Ignoring "noauto" for root device
	[  +0.206181] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +0.261980] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +19.584639] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.482860] systemd-fstab-generator[4981]: Ignoring "noauto" for root device
	[  +0.846439] systemd-fstab-generator[5035]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5] <==
	* 2021-08-13 20:58:16.461857 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (198.960862ms) to execute
	2021-08-13 20:58:16.462013 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 " with result "range_response_count:0 size:5" took too long (199.025411ms) to execute
	2021-08-13 20:58:16.462116 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (190.42222ms) to execute
	2021-08-13 20:58:16.462337 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (179.184455ms) to execute
	2021-08-13 20:58:16.462702 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (172.711746ms) to execute
	2021-08-13 20:58:16.462925 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (170.528555ms) to execute
	2021-08-13 20:58:16.463221 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.293847ms) to execute
	2021-08-13 20:58:16.463747 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (158.490371ms) to execute
	2021-08-13 20:58:16.464124 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (152.464331ms) to execute
	2021-08-13 20:58:16.477058 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (151.343452ms) to execute
	2021-08-13 20:58:16.478005 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (142.028022ms) to execute
	2021-08-13 20:58:16.478939 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (142.259692ms) to execute
	2021-08-13 20:58:16.479721 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (129.328346ms) to execute
	2021-08-13 20:58:16.479967 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (126.882803ms) to execute
	2021-08-13 20:58:16.480303 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:11 size:5977" took too long (116.866258ms) to execute
	2021-08-13 20:58:16.480852 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (116.970061ms) to execute
	2021-08-13 20:58:23.354247 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/cluster-admin\" " with result "range_response_count:1 size:718" took too long (1.914180768s) to execute
	2021-08-13 20:58:23.356685 W | etcdserver: request "header:<ID:14244176716868856811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" value_size:717 lease:5020804680014080881 >> failure:<>>" with result "size:16" took too long (1.23562281s) to execute
	2021-08-13 20:58:23.370142 W | wal: sync duration of 1.250273887s, expected less than 1s
	2021-08-13 20:58:23.370676 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.152835664s) to execute
	2021-08-13 20:58:23.371565 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.728436243s) to execute
	2021-08-13 20:58:23.371769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.847351028s) to execute
	2021-08-13 20:58:23.378962 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-jzmnb\" " with result "range_response_count:1 size:4862" took too long (671.753147ms) to execute
	2021-08-13 20:58:24.705568 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:1 size:4394" took too long (221.501911ms) to execute
	2021-08-13 20:58:26.341296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
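
[editor's note] The "wal: sync duration of 1.250273887s, expected less than 1s" warning above means etcd's write-ahead-log fsync blocked for over a second, which is what stretched the surrounding range requests past their deadlines. The following standalone probe measures that disk symptom in isolation, as a sketch; the temp-file location and the 1 MiB payload are arbitrary choices for the example.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	f, err := os.CreateTemp("", "fsync-probe-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	// Roughly a WAL segment's worth of pending entries.
	if _, err := f.Write(make([]byte, 1<<20)); err != nil {
		panic(err)
	}

	start := time.Now()
	if err := f.Sync(); err != nil { // the fsync etcd's WAL waits on
		panic(err)
	}
	fmt.Printf("sync duration of %v (etcd expects less than 1s)\n", time.Since(start))
}
```
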
	
	* 
	* ==> etcd [df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418] <==
	* 2021-08-13 20:56:51.867973 I | embed: serving client requests on 192.168.61.151:2379
	2021-08-13 20:56:51.875825 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:57:01.271062 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" " with result "range_response_count:0 size:5" took too long (480.2351ms) to execute
	2021-08-13 20:57:01.272131 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813205520-393438\" " with result "range_response_count:1 size:3758" took too long (875.676682ms) to execute
	2021-08-13 20:57:01.273551 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f12dc\" " with result "range_response_count:1 size:735" took too long (792.283833ms) to execute
	2021-08-13 20:57:02.171621 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (872.818648ms) to execute
	2021-08-13 20:57:02.172160 W | etcdserver: request "header:<ID:14244176716848216677 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:222 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3993 >> failure:<request_range:<key:\"/registry/minions/pause-20210813205520-393438\" > >>" with result "size:16" took too long (128.660032ms) to execute
	2021-08-13 20:57:02.172330 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (871.615956ms) to execute
	2021-08-13 20:57:02.172733 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f2f58\" " with result "range_response_count:1 size:733" took too long (859.92991ms) to execute
	2021-08-13 20:57:02.172849 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (853.236151ms) to execute
	2021-08-13 20:57:09.290631 W | etcdserver: request "header:<ID:14244176716848216792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3277 >> failure:<>>" with result "size:5" took too long (472.704737ms) to execute
	2021-08-13 20:57:09.291659 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (897.879132ms) to execute
	2021-08-13 20:57:09.298807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210813205520-393438\" " with result "range_response_count:1 size:4986" took too long (528.421007ms) to execute
	2021-08-13 20:57:09.299124 W | etcdserver: read-only range request "key:\"/registry/csinodes/pause-20210813205520-393438\" " with result "range_response_count:1 size:656" took too long (894.254864ms) to execute
	2021-08-13 20:57:13.314052 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (127.466898ms) to execute
	2021-08-13 20:57:13.314663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (132.387511ms) to execute
	2021-08-13 20:57:16.343764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:20.988739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:30.989151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:39.442816 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (120.094417ms) to execute
	2021-08-13 20:57:40.988900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:50.989064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:58:00.244154 W | etcdserver: request "header:<ID:14244176716848217456 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:483 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:69 lease:5020804679993441646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>" with result "size:16" took too long (162.220853ms) to execute
	2021-08-13 20:58:00.245134 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (881.389444ms) to execute
	2021-08-13 20:58:00.989778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:58:54 up 3 min,  0 users,  load average: 1.49, 0.89, 0.36
	Linux pause-20210813205520-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c] <==
	* I0813 20:58:20.351321       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:58:20.372737       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0813 20:58:20.375890       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 20:58:20.387225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:58:20.401103       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:58:20.403283       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 20:58:20.407207       1 cache.go:39] Caches are synced for autoregister controller
	I0813 20:58:20.410957       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:58:21.065658       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:58:21.066635       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:58:21.090819       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:58:23.358425       1 trace.go:205] Trace[1442514083]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.628) (total time: 1729ms):
	Trace[1442514083]: ---"Object stored in database" 1729ms (20:58:00.358)
	Trace[1442514083]: [1.729557914s] [1.729557914s] END
	I0813 20:58:23.359893       1 trace.go:205] Trace[553017594]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.438) (total time: 1920ms):
	Trace[553017594]: ---"About to write a response" 1919ms (20:58:00.358)
	Trace[553017594]: [1.920866407s] [1.920866407s] END
	I0813 20:58:23.381663       1 trace.go:205] Trace[1143050190]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jzmnb,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:22.699) (total time: 682ms):
	Trace[1143050190]: ---"About to write a response" 681ms (20:58:00.380)
	Trace[1143050190]: [682.310081ms] [682.310081ms] END
	I0813 20:58:25.230359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:58:25.281700       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:58:25.373725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:58:25.413105       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:58:25.560667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
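
[editor's note] The Trace[...] blocks above come from k8s.io/utils/trace, which apiserver handlers use to emit a per-step latency breakdown only when a request exceeds a threshold; that is why fast requests never appear here. A minimal sketch of the same pattern follows (the traced "work" is a pair of sleeps, purely for illustration, and the 500ms threshold is an arbitrary choice):

```go
package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

func main() {
	t := utiltrace.New("Create",
		utiltrace.Field{Key: "url", Value: "/api/v1/namespaces/kube-system/events"})
	// Only log the step breakdown when the total time crosses the threshold.
	defer t.LogIfLong(500 * time.Millisecond)

	time.Sleep(300 * time.Millisecond) // stand-in: admission and validation
	t.Step("Object stored in database")
	time.Sleep(400 * time.Millisecond) // stand-in: waiting on the etcd txn
}
```
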
	
	* 
	* ==> kube-apiserver [11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016] <==
	* I0813 20:57:09.309542       1 trace.go:205] Trace[2046907584]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.501) (total time: 806ms):
	Trace[2046907584]: [806.482297ms] [806.482297ms] END
	I0813 20:57:09.310802       1 trace.go:205] Trace[146959614]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.771) (total time: 538ms):
	Trace[146959614]: ---"Object stored in database" 538ms (20:57:00.310)
	Trace[146959614]: [538.954794ms] [538.954794ms] END
	I0813 20:57:09.311138       1 trace.go:205] Trace[1128950750]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20210813205520-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1128950750]: ---"About to write a response" 537ms (20:57:00.307)
	Trace[1128950750]: [541.267103ms] [541.267103ms] END
	I0813 20:57:09.311248       1 trace.go:205] Trace[1268223707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1268223707]: ---"Object stored in database" 540ms (20:57:00.310)
	Trace[1268223707]: [541.971563ms] [541.971563ms] END
	I0813 20:57:09.311433       1 trace.go:205] Trace[1977445463]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.772) (total time: 538ms):
	Trace[1977445463]: ---"Object stored in database" 537ms (20:57:00.310)
	Trace[1977445463]: [538.348208ms] [538.348208ms] END
	I0813 20:57:09.321803       1 trace.go:205] Trace[494614999]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 552ms):
	Trace[494614999]: [552.453895ms] [552.453895ms] END
	I0813 20:57:09.345220       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:57:16.259955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:57:16.380865       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:57:37.272234       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:57:37.272418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:57:37.272507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:58:00.246413       1 trace.go:205] Trace[1997979141]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (13-Aug-2021 20:57:59.258) (total time: 987ms):
	Trace[1997979141]: ---"Transaction committed" 984ms (20:58:00.246)
	Trace[1997979141]: [987.521712ms] [987.521712ms] END
	
	* 
	* ==> kube-controller-manager [57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849] <==
	* I0813 20:58:25.074041       1 daemon_controller.go:285] Starting daemon sets controller
	I0813 20:58:25.074050       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0813 20:58:25.116517       1 controllermanager.go:574] Started "horizontalpodautoscaling"
	I0813 20:58:25.116556       1 horizontal.go:169] Starting HPA controller
	I0813 20:58:25.116758       1 shared_informer.go:240] Waiting for caches to sync for HPA
	E0813 20:58:25.120839       1 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0813 20:58:25.120857       1 controllermanager.go:566] Skipping "service"
	I0813 20:58:25.124370       1 controllermanager.go:574] Started "persistentvolume-expander"
	I0813 20:58:25.124569       1 expand_controller.go:327] Starting expand controller
	I0813 20:58:25.124579       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0813 20:58:25.175876       1 controllermanager.go:574] Started "namespace"
	I0813 20:58:25.176251       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:58:25.176376       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0813 20:58:25.185657       1 controllermanager.go:574] Started "serviceaccount"
	I0813 20:58:25.187325       1 serviceaccounts_controller.go:117] Starting service account controller
	I0813 20:58:25.187340       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0813 20:58:25.192151       1 controllermanager.go:574] Started "replicaset"
	I0813 20:58:25.192315       1 replica_set.go:182] Starting replicaset controller
	I0813 20:58:25.192327       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0813 20:58:25.200141       1 controllermanager.go:574] Started "bootstrapsigner"
	I0813 20:58:25.200611       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0813 20:58:25.204061       1 controllermanager.go:574] Started "cronjob"
	I0813 20:58:25.204578       1 cronjob_controllerv2.go:125] Starting cronjob controller v2
	I0813 20:58:25.204590       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 20:58:25.207401       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8] <==
	* I0813 20:57:15.593972       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:57:15.593991       1 disruption.go:371] Sending events to api server.
	I0813 20:57:15.596695       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:57:15.636700       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:57:15.652896       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:57:15.701400       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:57:15.701628       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0813 20:57:15.701702       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:57:15.701748       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:57:15.701825       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:57:15.702024       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	I0813 20:57:15.735577       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:57:15.751667       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:57:15.767285       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:15.796364       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0813 20:57:15.847876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:16.199991       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.200121       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:57:16.224599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.277997       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:57:16.457337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlf5c"
	I0813 20:57:16.545672       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-fhxw7"
	I0813 20:57:16.596799       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-jzmnb"
	I0813 20:57:16.804186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:57:16.819742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-fhxw7"
	
	* 
	* ==> kube-proxy [0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5] <==
	* E0813 20:58:20.334846       1 node.go:161] Failed to retrieve node info: nodes "pause-20210813205520-393438" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0813 20:58:21.364522       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:58:21.365223       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:58:21.366125       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:58:23.461362       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:58:23.462248       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:58:23.465333       1 server_others.go:212] Using iptables Proxier.
	I0813 20:58:23.483125       1 server.go:643] Version: v1.21.3
	I0813 20:58:23.488959       1 config.go:315] Starting service config controller
	I0813 20:58:23.490323       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:58:23.490593       1 config.go:224] Starting endpoint slice config controller
	I0813 20:58:23.490606       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:58:23.512424       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:58:23.514744       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:58:23.591163       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:58:23.593313       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62] <==
	* I0813 20:57:20.123665       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:57:20.123841       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:57:20.123909       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:57:20.180054       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:57:20.180158       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:57:20.180173       1 server_others.go:212] Using iptables Proxier.
	I0813 20:57:20.181825       1 server.go:643] Version: v1.21.3
	I0813 20:57:20.184367       1 config.go:315] Starting service config controller
	I0813 20:57:20.184561       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:57:20.184600       1 config.go:224] Starting endpoint slice config controller
	I0813 20:57:20.184604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:57:20.203222       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:57:20.207174       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:57:20.285130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:57:20.285144       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c] <==
	* I0813 20:58:11.830530       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:58:20.220887       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:58:20.224373       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:58:20.224624       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:58:20.224640       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:58:20.341243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:58:20.343223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.343608       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.347257       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 20:58:20.444874       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889] <==
	* I0813 20:56:57.340096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:56:57.373873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:57.375600       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:57.398047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:57.406392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:56:57.418940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.424521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:57.426539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:56:57.426578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:56:57.428616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:56:57.428811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:57.428897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:56:58.261670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:58.311937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:58.405804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.463800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:58.585826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.615525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:58.626736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.669986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:58.791820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:57:01.440271       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:58:55 UTC. --
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.453957    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454194    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454410    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454727    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454757    2832 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.611414    2832 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-20210813205520-393438.169af943ec02b0a4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-20210813205520-393438", UID:"36ca0d21ef43020c8f018e62049ff15f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.61.151:8443/readyz\": dial tcp 192.168.61.151:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-20210813205520-393438"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.61.151:8443: connect: connection refused'(may retry after sleeping)
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.428873    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429203    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429890    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430165    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430396    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:10.430620    2832 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430883    2832 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.632976    2832 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:11.038724    2832 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:11.294567    2832 status_manager.go:566] "Failed to get status for pod" podUID=469cea0375ae276925a50e4dde7e4ace pod="kube-system/kube-scheduler-pause-20210813205520-393438" error="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-20210813205520-393438\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.266431    2832 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.269986    2832 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.541317    2832 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590904    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6vd\" (UniqueName: \"kubernetes.io/projected/99920d7c-bb8d-4c65-bf44-b56f23a40e53-kube-api-access-xw6vd\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590979    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/99920d7c-bb8d-4c65-bf44-b56f23a40e53-tmp\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:29 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:29.362225    2832 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000328290, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000328280)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f0480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bcc80, 0x18e5530, 0xc0003284c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004ceee0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004ceee0, 0x18b3d60, 0xc000311f80, 0x1, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ceee0, 0x3b9aca00, 0x0, 0x17a0501, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0004ceee0, 0x3b9aca00, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0813 20:58:54.644106  430035 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (26.42s)
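For reference, the step that produced the exit-110 failure above is minikube shelling out to kubectl. A minimal Go sketch (not part of the test suite) of re-running that exact describe-nodes command with a hard deadline, so a paused apiserver fails fast instead of stalling in the TLS handshake; the command path and kubeconfig are copied from the stderr above, while the 30-second deadline is an assumption:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the call so a non-responsive apiserver cannot hang the collector.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "sudo",
		"/var/lib/minikube/binaries/v1.21.3/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}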

TestPause/serial/VerifyStatus (19.09s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813205520-393438 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813205520-393438 --output=json --layout=cluster: exit status 2 (299.193802ms)

-- stdout --
	{"Name":"pause-20210813205520-393438","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210813205520-393438 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813205520-393438","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 20:58:55.535833  430187 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:58:55.535871  430187 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0813 20:58:55.535899  430187 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
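The three status.go:602 errors in the stderr above are strconv.Atoi being handed an empty string, which always fails with that exact message. A minimal Go sketch reproducing the failure mode and guarding for it (the guard is an illustration, not minikube's actual handling):

package main

import (
	"fmt"
	"strconv"
)

// parseExitCode checks the empty-string case before calling strconv.Atoi,
// so a missing exit code is reported distinctly from a malformed one.
func parseExitCode(s string) (int, bool) {
	if s == "" {
		return 0, false // exit code was never recorded
	}
	n, err := strconv.Atoi(s)
	return n, err == nil
}

func main() {
	// Reproduces the stderr above verbatim:
	_, err := strconv.Atoi("")
	fmt.Println(err) // strconv.Atoi: parsing "": invalid syntax

	if _, ok := parseExitCode(""); !ok {
		fmt.Println("exit code not found (handled)")
	}
}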
pause_test.go:190: incorrect status code: 101
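The JSON in the stdout block above is what the assertion rejects: the cluster-level StatusCode is 101 ("Pausing"), a transient state, even though the apiserver component already reports 418 ("Paused"). A minimal Go sketch of decoding that payload; the struct mirrors only the top-level keys visible above, not minikube's actual types, and the settled-state expectation is inferred from the test name rather than shown in the source:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterState covers only the top-level fields visible in the stdout block.
type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	raw := []byte(`{"Name":"pause-20210813205520-393438","StatusCode":101,"StatusName":"Pausing"}`)
	var st clusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// 101 ("Pausing") is transient; the test fails on it, presumably
	// expecting a settled code such as 418 ("Paused") once pausing finishes.
	if st.StatusCode == 101 {
		fmt.Printf("incorrect status code: %d (%s)\n", st.StatusCode, st.StatusName)
	}
}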
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (280.883469ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25: exit status 110 (18.461132777s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:30:34 UTC | Fri, 13 Aug 2021 20:31:44 UTC |
	|         | node start m03                           |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	| stop    | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:31:45 UTC | Fri, 13 Aug 2021 20:34:51 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:34:51 UTC | Fri, 13 Aug 2021 20:40:57 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:58 UTC | Fri, 13 Aug 2021 20:40:59 UTC |
	|         | node delete m03                          |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:00 UTC | Fri, 13 Aug 2021 20:44:04 UTC |
	|         | stop                                     |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:48:01 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2          |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:01 UTC | Fri, 13 Aug 2021 20:49:01 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:02 UTC | Fri, 13 Aug 2021 20:49:03 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:03 UTC | Fri, 13 Aug 2021 20:49:05 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:38 UTC | Fri, 13 Aug 2021 20:52:46 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:47 UTC | Fri, 13 Aug 2021 20:52:48 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:48 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2           |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:41 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:41 UTC | Fri, 13 Aug 2021 20:54:41 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:42 UTC | Fri, 13 Aug 2021 20:54:42 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:55 UTC | Fri, 13 Aug 2021 20:55:02 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:20 UTC | Fri, 13 Aug 2021 20:55:20 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:33 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=kvm2                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:33 UTC | Fri, 13 Aug 2021 20:57:35 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                 |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:54 UTC | Fri, 13 Aug 2021 20:58:28 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:27 UTC | Fri, 13 Aug 2021 20:58:34 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| logs    | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:34 UTC | Fri, 13 Aug 2021 20:58:35 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:35 UTC | Fri, 13 Aug 2021 20:58:36 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:58:36
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:58:36.952203  429844 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:58:36.952268  429844 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:36.952271  429844 out.go:311] Setting ErrFile to fd 2...
	I0813 20:58:36.952274  429844 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:58:36.952377  429844 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:58:36.952620  429844 out.go:305] Setting JSON to false
	I0813 20:58:36.993458  429844 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6079,"bootTime":1628882238,"procs":193,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:58:36.993602  429844 start.go:121] virtualization: kvm guest
	I0813 20:58:36.996289  429844 out.go:177] * [force-systemd-env-20210813205836-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:58:36.998267  429844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:36.996433  429844 notify.go:169] Checking for updates...
	I0813 20:58:36.999665  429844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:58:37.001250  429844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.002637  429844 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:58:37.004247  429844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0813 20:58:37.004726  429844 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:58:37.004843  429844 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:58:37.004926  429844 config.go:177] Loaded profile config "running-upgrade-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:58:37.004973  429844 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:58:37.038645  429844 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:58:37.038692  429844 start.go:278] selected driver: kvm2
	I0813 20:58:37.038699  429844 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:58:37.038719  429844 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:58:37.039903  429844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:58:37.040054  429844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:58:37.053876  429844 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:58:37.053933  429844 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:58:37.054106  429844 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:58:37.054129  429844 cni.go:93] Creating CNI manager for ""
	I0813 20:58:37.054137  429844 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:37.054146  429844 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:58:37.054157  429844 start_flags.go:277] config:
	{Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:37.054268  429844 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:58:37.056320  429844 out.go:177] * Starting control plane node force-systemd-env-20210813205836-393438 in cluster force-systemd-env-20210813205836-393438
	I0813 20:58:37.056345  429844 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:58:37.056422  429844 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:58:37.056447  429844 cache.go:56] Caching tarball of preloaded images
	I0813 20:58:37.056610  429844 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:58:37.056638  429844 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 20:58:37.056768  429844 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/config.json ...
	I0813 20:58:37.056798  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/config.json: {Name:mk2424ff0b393a5833d75487ec48825d62c046b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:37.056936  429844 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:58:37.056960  429844 start.go:313] acquiring machines lock for force-systemd-env-20210813205836-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:58:37.057007  429844 start.go:317] acquired machines lock for "force-systemd-env-20210813205836-393438" in 32.17µs
	I0813 20:58:37.057028  429844 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 20:58:37.057090  429844 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 20:58:40.245466  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (9.240993229s)
	I0813 20:58:40.245499  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0813 20:58:40.245526  429197 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:40.245576  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0813 20:58:37.059284  429844 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 20:58:37.059435  429844 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:37.059476  429844 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:37.071815  429844 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44353
	I0813 20:58:37.072267  429844 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:37.072895  429844 main.go:130] libmachine: Using API Version  1
	I0813 20:58:37.072918  429844 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:37.073296  429844 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:37.073509  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetMachineName
	I0813 20:58:37.073656  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .DriverName
	I0813 20:58:37.073822  429844 start.go:160] libmachine.API.Create for "force-systemd-env-20210813205836-393438" (driver="kvm2")
	I0813 20:58:37.073856  429844 client.go:168] LocalClient.Create starting
	I0813 20:58:37.073888  429844 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 20:58:37.073915  429844 main.go:130] libmachine: Decoding PEM data...
	I0813 20:58:37.073935  429844 main.go:130] libmachine: Parsing certificate...
	I0813 20:58:37.074104  429844 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 20:58:37.074133  429844 main.go:130] libmachine: Decoding PEM data...
	I0813 20:58:37.074148  429844 main.go:130] libmachine: Parsing certificate...
	I0813 20:58:37.074201  429844 main.go:130] libmachine: Running pre-create checks...
	I0813 20:58:37.074216  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .PreCreateCheck
	I0813 20:58:37.074534  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetConfigRaw
	I0813 20:58:37.075075  429844 main.go:130] libmachine: Creating machine...
	I0813 20:58:37.075104  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .Create
	I0813 20:58:37.075246  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating KVM machine...
	I0813 20:58:37.078160  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | found existing default KVM network
	I0813 20:58:37.080000  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.079830  429868 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:ff:3c}}
	I0813 20:58:37.080866  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.080772  429868 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 20:58:37.081886  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.081803  429868 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:b4:07}}
	I0813 20:58:37.083003  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.082914  429868 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:d0:b0}}
	I0813 20:58:37.085530  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.085420  429868 network.go:288] reserving subnet 192.168.83.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.83.0:0xc0000be0a8] misses:0}
	I0813 20:58:37.085577  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.085460  429868 network.go:235] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 20:58:37.113053  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | trying to create private KVM network mk-force-systemd-env-20210813205836-393438 192.168.83.0/24...
	I0813 20:58:37.387280  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | private KVM network mk-force-systemd-env-20210813205836-393438 192.168.83.0/24 created
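
The network.go lines above show how minikube picks a libvirt subnet: it walks candidate 192.168.x.0/24 ranges, skips every CIDR already claimed by a host interface (virbr1 through virbr4 here), and reserves the first free one before creating the private KVM network. A minimal Go sketch of that kind of scan, assuming a simple containment-based overlap test (firstFree and the candidate list are illustrative, not minikube's actual API):

    // free_subnet.go - a minimal sketch (not minikube's real network.go)
    // of the subnet scan logged above: skip candidate CIDRs that overlap
    // a network the host already uses, return the first free one.
    package main

    import (
        "fmt"
        "net"
    )

    func firstFree(candidates, taken []string) (string, error) {
        for _, c := range candidates {
            _, cNet, err := net.ParseCIDR(c)
            if err != nil {
                return "", err
            }
            free := true
            for _, t := range taken {
                _, tNet, err := net.ParseCIDR(t)
                if err != nil {
                    return "", err
                }
                // overlap if either network contains the other's base IP
                if tNet.Contains(cNet.IP) || cNet.Contains(tNet.IP) {
                    free = false
                    break
                }
            }
            if free {
                return c, nil
            }
        }
        return "", fmt.Errorf("all %d candidates taken", len(candidates))
    }

    func main() {
        taken := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
        candidates := append(taken, "192.168.83.0/24") // same order the log walks them
        fmt.Println(firstFree(candidates, taken))      // 192.168.83.0/24 <nil>
    }
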
	I0813 20:58:37.387319  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 ...
	I0813 20:58:37.387343  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.387269  429868 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.387370  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:58:37.394836  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 20:58:37.595803  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.595655  429868 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438/id_rsa...
	I0813 20:58:37.683222  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.683102  429868 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438/force-systemd-env-20210813205836-393438.rawdisk...
	I0813 20:58:37.683254  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Writing magic tar header
	I0813 20:58:37.683270  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Writing SSH key tar header
	I0813 20:58:37.683287  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:37.683226  429868 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 ...
	I0813 20:58:37.683364  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438
	I0813 20:58:37.683421  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/force-systemd-env-20210813205836-393438 (perms=drwx------)
	I0813 20:58:37.683459  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 20:58:37.683476  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 20:58:37.683518  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:58:37.683542  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 20:58:37.683563  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 20:58:37.683583  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 20:58:37.683607  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 20:58:37.683631  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 20:58:37.683641  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating domain...
	I0813 20:58:37.683658  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 20:58:37.683668  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 20:58:37.683680  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Checking permissions on dir: /home
	I0813 20:58:37.683693  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | Skipping /home - not owner
	I0813 20:58:37.705670  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:f8:d3:f1 in network default
	I0813 20:58:37.706154  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring networks are active...
	I0813 20:58:37.706184  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:37.708124  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring network default is active
	I0813 20:58:37.708535  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Ensuring network mk-force-systemd-env-20210813205836-393438 is active
	I0813 20:58:37.709174  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Getting domain xml...
	I0813 20:58:37.711001  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Creating domain...
	I0813 20:58:38.127750  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Waiting to get IP...
	I0813 20:58:38.128819  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.129396  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.129460  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.129376  429868 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 20:58:38.393943  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.394483  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.394517  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.394426  429868 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 20:58:38.777141  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.777637  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:38.777673  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:38.777573  429868 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 20:58:39.202079  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.202628  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.202689  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:39.202573  429868 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 20:58:39.677262  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.677759  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:39.677907  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:39.677815  429868 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 20:58:40.266574  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:40.267149  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:40.267187  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:40.267073  429868 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 20:58:41.102502  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.103078  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.103108  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:41.103009  429868 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 20:58:41.851012  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.851569  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:41.851607  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:41.851526  429868 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
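
The interleaved retry.go lines are a poll-with-growing-backoff loop: each failed IP lookup schedules the next attempt after a slightly longer, jittered delay (263ms, 381ms, 422ms, 473ms, ...). A hedged sketch of such a loop, with lookupIP standing in for the real libvirt DHCP-lease query:

    // retry_sketch.go - a hedged sketch of the poll loop behind the
    // "will retry after ...: waiting for machine to come up" lines.
    // lookupIP stands in for the real libvirt DHCP-lease query.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    func lookupIP(attempt int) (string, error) {
        if attempt < 5 { // pretend the lease shows up on the 5th try
            return "", errNoIP
        }
        return "192.168.83.2", nil
    }

    func main() {
        base := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            if ip, err := lookupIP(attempt); err == nil {
                fmt.Println("machine up at", ip)
                return
            }
            // delay grows with the attempt count plus jitter, roughly the
            // progression seen in the log above
            delay := base*time.Duration(attempt) + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
    }
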
	I0813 20:58:43.969399  429159 out.go:204]   - Configuring RBAC rules ...
	I0813 20:58:44.522081  429159 cni.go:93] Creating CNI manager for ""
	I0813 20:58:44.522110  429159 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:44.523838  429159 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:58:44.523926  429159 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:58:44.537355  429159 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 20:58:44.558964  429159 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:58:44.559026  429159 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:58:44.559034  429159 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=kubernetes-upgrade-20210813205735-393438 minikube.k8s.io/updated_at=2021_08_13T20_58_44_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 20:58:44.575216  429159 ops.go:34] apiserver oom_adj: 16
	I0813 20:58:44.575231  429159 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 20:58:44.575244  429159 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:58:44.999176  429159 kubeadm.go:985] duration metric: took 440.206594ms to wait for elevateKubeSystemPrivileges.
	I0813 20:58:44.999256  429159 kubeadm.go:392] StartCluster complete in 19.130731142s
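
Two small post-init steps are visible above: the apiserver's /proc/<pid>/oom_adj is read (16) and lowered to -10 so the kernel OOM killer prefers other victims, and a cluster-admin role binding is granted to kube-system service accounts. A sketch of the oom_adj adjustment only, assuming local /proc access (minikube runs the equivalent cat/tee shell commands over SSH, and actually lowering the value needs root):

    // oom_sketch.go - illustration of the oom_adj step logged above:
    // read a process's /proc/<pid>/oom_adj and lower it to -10.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func adjustOOM(pid int, target string) error {
        path := fmt.Sprintf("/proc/%d/oom_adj", pid)
        cur, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(cur)))
        // equivalent of: echo -10 | sudo tee /proc/<pid>/oom_adj
        return os.WriteFile(path, []byte(target+"\n"), 0o644)
    }

    func main() {
        // demo against this process; substitute pgrep output for real use
        if err := adjustOOM(os.Getpid(), "-10"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
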
	I0813 20:58:44.999283  429159 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:44.999382  429159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:45.001133  429159 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:45.002169  429159 kapi.go:59] client config for kubernetes-upgrade-20210813205735-393438: &rest.Config{Host:"https://192.168.39.75:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:41.012327  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache
	I0813 20:58:41.012371  429197 containerd.go:280] Loading image: /var/lib/minikube/images/coredns_1.7.0
	I0813 20:58:41.012422  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0
	I0813 20:58:41.782348  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache
	I0813 20:58:41.782394  429197 containerd.go:280] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0813 20:58:41.782464  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0813 20:58:43.014842  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0: (1.232349741s)
	I0813 20:58:43.014872  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache
	I0813 20:58:43.014894  429197 containerd.go:280] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I0813 20:58:43.014943  429197 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0
	I0813 20:58:45.560046  429159 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210813205735-393438" rescaled to 1
	I0813 20:58:45.560108  429159 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 20:58:45.561738  429159 out.go:177] * Verifying Kubernetes components...
	I0813 20:58:45.561802  429159 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:58:45.560165  429159 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:58:45.560188  429159 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 20:58:45.560387  429159 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:58:45.561900  429159 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210813205735-393438"
	I0813 20:58:45.561919  429159 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210813205735-393438"
	W0813 20:58:45.561930  429159 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:58:45.561948  429159 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210813205735-393438"
	I0813 20:58:45.561962  429159 host.go:66] Checking if "kubernetes-upgrade-20210813205735-393438" exists ...
	I0813 20:58:45.561969  429159 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210813205735-393438"
	I0813 20:58:45.562408  429159 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:45.562432  429159 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:45.562446  429159 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:45.562461  429159 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:45.584670  429159 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0813 20:58:45.585317  429159 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:45.585947  429159 main.go:130] libmachine: Using API Version  1
	I0813 20:58:45.585964  429159 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:45.586411  429159 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:45.587050  429159 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:45.587096  429159 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:45.590121  429159 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42867
	I0813 20:58:45.590528  429159 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:45.591101  429159 main.go:130] libmachine: Using API Version  1
	I0813 20:58:45.591126  429159 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:45.591491  429159 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:45.591677  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:58:45.596447  429159 kapi.go:59] client config for kubernetes-upgrade-20210813205735-393438: &rest.Config{Host:"https://192.168.39.75:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:45.600671  429159 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0813 20:58:45.602172  429159 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:45.602715  429159 main.go:130] libmachine: Using API Version  1
	I0813 20:58:45.602733  429159 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:45.603159  429159 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:45.603323  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:58:45.606207  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:45.608267  429159 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:58:45.608388  429159 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:58:45.608404  429159 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:58:45.608425  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:45.611070  429159 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210813205735-393438"
	W0813 20:58:45.611145  429159 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:58:45.611194  429159 host.go:66] Checking if "kubernetes-upgrade-20210813205735-393438" exists ...
	I0813 20:58:45.611662  429159 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:45.611737  429159 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:45.614890  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:45.615371  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:45.615408  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:45.615631  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:45.615788  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:45.615921  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:45.616048  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:45.625461  429159 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40411
	I0813 20:58:45.625875  429159 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:45.626358  429159 main.go:130] libmachine: Using API Version  1
	I0813 20:58:45.626389  429159 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:45.626838  429159 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:45.627438  429159 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:58:45.627478  429159 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:58:45.641154  429159 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0813 20:58:45.641622  429159 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:58:45.642128  429159 main.go:130] libmachine: Using API Version  1
	I0813 20:58:45.642149  429159 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:58:45.642551  429159 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:58:45.642750  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:58:45.648187  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:58:45.650231  429159 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:58:45.650247  429159 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:58:45.650267  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:58:45.656254  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:45.656712  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:58:45.656739  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:58:45.656985  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:58:45.657137  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:58:45.657295  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:58:45.657419  429159 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:58:45.723001  429159 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 20:58:45.723863  429159 kapi.go:59] client config for kubernetes-upgrade-20210813205735-393438: &rest.Config{Host:"https://192.168.39.75:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:58:45.725225  429159 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:58:45.725270  429159 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:58:45.741319  429159 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:58:45.755227  429159 api_server.go:70] duration metric: took 195.080998ms to wait for apiserver process to appear ...
	I0813 20:58:45.755251  429159 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:58:45.755262  429159 api_server.go:239] Checking apiserver healthz at https://192.168.39.75:8443/healthz ...
	I0813 20:58:45.772657  429159 api_server.go:265] https://192.168.39.75:8443/healthz returned 200:
	ok
	I0813 20:58:45.773887  429159 api_server.go:139] control plane version: v1.14.0
	I0813 20:58:45.773911  429159 api_server.go:129] duration metric: took 18.650191ms to wait for apiserver health ...
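
The apiserver health wait above is a plain HTTPS GET against https://<node-ip>:8443/healthz that treats a 200 response with body "ok" as healthy. A self-contained sketch (InsecureSkipVerify only keeps the example short; the real check trusts the cluster CA from the kubeconfig shown above):

    // healthz_sketch.go - poll the apiserver healthz endpoint once and
    // report whether it answered 200 "ok", as in the log lines above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func healthz(url string) (bool, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        fmt.Println(healthz("https://192.168.39.75:8443/healthz"))
    }
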
	I0813 20:58:45.773922  429159 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:58:45.786259  429159 system_pods.go:59] 0 kube-system pods found
	I0813 20:58:45.786290  429159 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0813 20:58:45.816348  429159 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:58:46.118512  429159 system_pods.go:59] 0 kube-system pods found
	I0813 20:58:46.118603  429159 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0813 20:58:46.460537  429159 system_pods.go:59] 0 kube-system pods found
	I0813 20:58:46.460576  429159 retry.go:31] will retry after 378.459802ms: only 0 pod(s) have shown up
	I0813 20:58:46.543644  429159 start.go:728] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
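
The long sed pipeline run at 20:58:45.723 is what produces this "host record injected" line: it fetches the coredns ConfigMap, splices a hosts{} stanza in front of the forward directive so host.minikube.internal resolves to the host gateway (192.168.39.1 here), and replaces the ConfigMap. A sketch of the same string edit on a trimmed example Corefile, not the live ConfigMap content:

    // corefile_sketch.go - insert a hosts{} stanza before the forward
    // line of a Corefile, mirroring the sed /i command in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}"
        hosts := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n    forward . /etc/resolv.conf"
        // replace the forward line with the stanza plus the forward line,
        // which is equivalent to inserting the stanza above it
        fmt.Println(strings.Replace(corefile, "    forward . /etc/resolv.conf", hosts, 1))
    }
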
	I0813 20:58:46.623305  429159 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:46.623407  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Close
	I0813 20:58:46.623499  429159 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:46.623546  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Close
	I0813 20:58:46.623843  429159 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:46.623857  429159 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:46.623874  429159 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:46.623886  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Close
	I0813 20:58:46.623979  429159 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:46.623994  429159 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:46.624004  429159 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:46.624020  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Close
	I0813 20:58:46.624132  429159 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:46.624152  429159 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:46.624163  429159 main.go:130] libmachine: Making call to close driver server
	I0813 20:58:46.624172  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Close
	I0813 20:58:46.624238  429159 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Closing plugin on server side
	I0813 20:58:46.624325  429159 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:46.624348  429159 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:46.626276  429159 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:58:46.626301  429159 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:58:42.840899  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:42.841351  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:42.841377  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:42.841284  429868 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 20:58:44.032397  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:44.032978  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:44.033013  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:44.032898  429868 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 20:58:45.711875  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:45.712758  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:45.712790  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:45.712733  429868 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 20:58:46.628392  429159 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:58:46.628414  429159 addons.go:344] enableAddons completed in 1.068233981s
	I0813 20:58:46.852764  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:46.852801  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:46.852821  429159 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0813 20:58:47.327835  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:47.327870  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:47.327886  429159 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0813 20:58:48.000488  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:48.000521  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:48.000536  429159 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0813 20:58:48.602933  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:48.602976  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:48.602992  429159 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
	I0813 20:58:49.397879  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:49.397919  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:49.397935  429159 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0813 20:58:46.988660  429197 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0: (3.973686242s)
	I0813 20:58:46.988694  429197 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
	I0813 20:58:46.988730  429197 cache_images.go:113] Successfully loaded all cached images
	I0813 20:58:46.988736  429197 cache_images.go:82] LoadImages completed in 42.290547836s
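
Each "Loading image" / "Transferred and loaded ... from cache" pair above is one iteration of the image-load loop: stream a cached tarball to the node, import it into containerd's k8s.io namespace with ctr, and record the duration. A rough local approximation (the real runs go through ssh_runner with sudo on the guest):

    // loadimages_sketch.go - import cached image tarballs with ctr and
    // time each one, like the "Completed: sudo ctr ... (9.24s)" lines.
    // Needs containerd running locally for the command to succeed.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        tarballs := []string{
            "/var/lib/minikube/images/coredns_1.7.0",
            "/var/lib/minikube/images/etcd_3.4.13-0",
        }
        for _, tar := range tarballs {
            start := time.Now()
            out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
            fmt.Printf("import %s: (%s) err=%v\n%s", tar, time.Since(start).Round(time.Millisecond), err, out)
        }
    }
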
	I0813 20:58:46.988799  429197 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:58:47.022549  429197 cni.go:93] Creating CNI manager for ""
	I0813 20:58:47.022586  429197 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:58:47.022600  429197 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:58:47.022619  429197 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.177 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20210813205520-393438 NodeName:running-upgrade-20210813205520-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.72.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:58:47.022828  429197 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "running-upgrade-20210813205520-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 20:58:47.023021  429197 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20210813205520-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.177 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:58:47.023094  429197 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0813 20:58:47.041826  429197 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:58:47.041924  429197 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:58:47.067611  429197 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (553 bytes)
	I0813 20:58:47.092063  429197 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:58:47.111889  429197 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0813 20:58:47.127743  429197 ssh_runner.go:149] Run: grep 192.168.72.177	control-plane.minikube.internal$ /etc/hosts
	I0813 20:58:47.133422  429197 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438 for IP: 192.168.72.177
	I0813 20:58:47.133483  429197 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:58:47.133502  429197 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:58:47.133579  429197 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.key
	I0813 20:58:47.133606  429197 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/apiserver.key.c6046103
	I0813 20:58:47.133630  429197 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/proxy-client.key
	I0813 20:58:47.133764  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:58:47.133809  429197 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:58:47.133819  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:58:47.133850  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:58:47.133900  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:58:47.133936  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:58:47.134002  429197 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:58:47.135352  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:58:47.158522  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:58:47.195276  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:58:47.224580  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 20:58:47.255058  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:58:47.281845  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:58:47.320022  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:58:47.363658  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:58:47.398079  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:58:47.449256  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:58:47.476708  429197 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:58:47.512052  429197 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:58:47.544298  429197 ssh_runner.go:149] Run: openssl version
	I0813 20:58:47.557596  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:58:47.585950  429197 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:47.596416  429197 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:47.596473  429197 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:58:47.607869  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 20:58:47.624505  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:58:47.652017  429197 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:58:47.667525  429197 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:58:47.667582  429197 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:58:47.681920  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:58:47.711458  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:58:47.725869  429197 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:58:47.737390  429197 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:58:47.737433  429197 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 20:58:47.750909  429197 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
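
Note: the three openssl/ln sequences above install each CA under its OpenSSL subject-hash name (e.g. b5213941.0) in /etc/ssl/certs. A minimal Go sketch of that convention follows; the helper name, paths, and error handling are illustrative, not minikube's actual certs.go code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert mirrors the sequence above: hash the PEM with
// `openssl x509 -hash -noout`, then force a symlink named <hash>.0
// under certDir, as `ln -fs` does.
func installCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := certDir + "/" + strings.TrimSpace(string(out)) + ".0" // e.g. /etc/ssl/certs/b5213941.0
	_ = os.Remove(link) // replace any stale link, as ln -fs does
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
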
	I0813 20:58:47.767677  429197 kubeadm.go:390] StartCluster: {Name:running-upgrade-20210813205520-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName
:running-upgrade-20210813205520-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.177 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:58:47.767802  429197 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:58:47.767851  429197 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:47.811226  429197 cri.go:76] found id: "c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30"
	I0813 20:58:47.811254  429197 cri.go:76] found id: "e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28"
	I0813 20:58:47.811263  429197 cri.go:76] found id: "3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28"
	I0813 20:58:47.811270  429197 cri.go:76] found id: "ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1"
	I0813 20:58:47.811277  429197 cri.go:76] found id: "3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f"
	I0813 20:58:47.811283  429197 cri.go:76] found id: "1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb"
	I0813 20:58:47.811290  429197 cri.go:76] found id: "0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27"
	I0813 20:58:47.811296  429197 cri.go:76] found id: ""
	I0813 20:58:47.811346  429197 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:58:47.869131  429197 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f","pid":3066,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f/rootfs","created":"2021-08-13T20:57:54.026039347Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_11ab50e9-0e9e-4491-8ebc-67c7a715291f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b","pid":2799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0e8921c6fa47913c43dd39c47d848e7a6
dd968b52f53d46fe799da29fd86f47b","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b/rootfs","created":"2021-08-13T20:57:49.017554987Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-n27fb_bc2792f7-9b16-46cb-935f-f338306bfa30"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27","pid":2593,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27/rootfs","created":"2021-08-13T20:57:24.380666543Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kub
ernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb","pid":2553,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb/rootfs","created":"2021-08-13T20:57:24.281917637Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b","pid":2474,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/38160f60d9
956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b/rootfs","created":"2021-08-13T20:57:23.314719001Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-20210813205520-393438_3478da2c440ba32fb6c087b3f3b99813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28","pid":2829,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28/rootfs","created":"2021-08-13T20:57:49.276286077Z","annotations":{"io.
kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f","pid":2608,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f/rootfs","created":"2021-08-13T20:57:24.457425933Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c","pid":2459,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v1.linux/k8s.io/6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c/rootfs","created":"2021-08-13T20:57:23.304768241Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-running-upgrade-20210813205520-393438_4baedbe3c5adfb275c862d08647c9d7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386","pid":2437,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386/rootfs","created":"2
021-08-13T20:57:23.157193317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-20210813205520-393438_2c66c49fc02b209e1c6ab751769e1740"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054","pid":2457,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054/rootfs","created":"2021-08-13T20:57:23.308864324Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-syst
em_kube-controller-manager-running-upgrade-20210813205520-393438_f8d3d61ad8d45c80ab92bcedbe7fdb7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c","pid":2976,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c/rootfs","created":"2021-08-13T20:57:51.894915758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-74ff55c5b-ttqx2_5fa519a7-e077-42ec-8813-20a70746db5d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30","pid":3098,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v1.linux/k8s.io/c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30/rootfs","created":"2021-08-13T20:57:54.598753561Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28","pid":3006,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28/rootfs","created":"2021-08-13T20:57:52.415569965Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri
.container-type":"container","io.kubernetes.cri.sandbox-id":"a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1/rootfs","created":"2021-08-13T20:57:24.559344296Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c"},"owner":"root"}]
	I0813 20:58:47.869464  429197 cri.go:113] list returned 14 containers
	I0813 20:58:47.869484  429197 cri.go:116] container: {ID:0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f Status:running}
	I0813 20:58:47.869499  429197 cri.go:118] skipping 0e1462e69f29b3639ec31e5f2b822a9f5930b8289100d566fa75667da158dc0f - not in ps
	I0813 20:58:47.869509  429197 cri.go:116] container: {ID:0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b Status:running}
	I0813 20:58:47.869516  429197 cri.go:118] skipping 0e8921c6fa47913c43dd39c47d848e7a6dd968b52f53d46fe799da29fd86f47b - not in ps
	I0813 20:58:47.869525  429197 cri.go:116] container: {ID:0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27 Status:running}
	I0813 20:58:47.869533  429197 cri.go:122] skipping {0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27 running}: state = "running", want "paused"
	I0813 20:58:47.869553  429197 cri.go:116] container: {ID:1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb Status:running}
	I0813 20:58:47.869560  429197 cri.go:122] skipping {1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb running}: state = "running", want "paused"
	I0813 20:58:47.869567  429197 cri.go:116] container: {ID:38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b Status:running}
	I0813 20:58:47.869573  429197 cri.go:118] skipping 38160f60d9956f4a390631057fcedc5d21aeeb402f067fa5c96ba6a1b9b94e6b - not in ps
	I0813 20:58:47.869579  429197 cri.go:116] container: {ID:3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28 Status:running}
	I0813 20:58:47.869586  429197 cri.go:122] skipping {3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28 running}: state = "running", want "paused"
	I0813 20:58:47.869595  429197 cri.go:116] container: {ID:3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f Status:running}
	I0813 20:58:47.869603  429197 cri.go:122] skipping {3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f running}: state = "running", want "paused"
	I0813 20:58:47.869614  429197 cri.go:116] container: {ID:6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c Status:running}
	I0813 20:58:47.869621  429197 cri.go:118] skipping 6870ee5b4239d614db6ee600b7fe7b664c71ae99f2f850c62cc48b0efbf3e85c - not in ps
	I0813 20:58:47.869630  429197 cri.go:116] container: {ID:89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386 Status:running}
	I0813 20:58:47.869637  429197 cri.go:118] skipping 89a93112d35ab17ccb348fe762f756a1a804c0c7a83f28eb68540ae1eb078386 - not in ps
	I0813 20:58:47.869644  429197 cri.go:116] container: {ID:89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054 Status:running}
	I0813 20:58:47.869651  429197 cri.go:118] skipping 89f1f2c0c6f36d7b3f1d6b81b05c103f4928c211cd289aa414b5ab8150353054 - not in ps
	I0813 20:58:47.869657  429197 cri.go:116] container: {ID:a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c Status:running}
	I0813 20:58:47.869672  429197 cri.go:118] skipping a271f10af45a8439126cd8be1b83d4d5dbdb50e19b186bfd526b13f18277cc3c - not in ps
	I0813 20:58:47.869680  429197 cri.go:116] container: {ID:c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30 Status:running}
	I0813 20:58:47.869687  429197 cri.go:122] skipping {c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30 running}: state = "running", want "paused"
	I0813 20:58:47.869695  429197 cri.go:116] container: {ID:e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28 Status:running}
	I0813 20:58:47.869701  429197 cri.go:122] skipping {e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28 running}: state = "running", want "paused"
	I0813 20:58:47.869708  429197 cri.go:116] container: {ID:ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1 Status:running}
	I0813 20:58:47.869715  429197 cri.go:122] skipping {ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1 running}: state = "running", want "paused"
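
Note: the cri.go lines above decode `runc list -f json` and skip every container whose state does not match the wanted one ("paused" here, so all 14 running entries are skipped). A minimal sketch of that decode-and-filter step; the struct and function names are illustrative, not minikube's actual cri.go code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer keeps only the fields the filter above uses; real
// `runc list -f json` entries also carry bundle, pid, annotations, etc.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// idsInState decodes the JSON list and applies the same skip rule as the
// log: drop anything whose state is not the wanted one.
func idsInState(root, want string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status != want {
			continue // skipping {<id> running}: state = "running", want "paused"
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	ids, err := idsInState("/run/containerd/runc/k8s.io", "paused")
	fmt.Println(ids, err)
}
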
	I0813 20:58:47.869771  429197 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:58:47.879668  429197 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 20:58:47.879705  429197 kubeadm.go:600] restartCluster start
	I0813 20:58:47.879757  429197 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 20:58:47.888606  429197 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:58:47.889750  429197 kubeconfig.go:117] verify returned: extract IP: "running-upgrade-20210813205520-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:58:47.890252  429197 kubeconfig.go:128] "running-upgrade-20210813205520-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 20:58:47.891314  429197 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:58:47.892631  429197 kapi.go:59] client config for running-upgrade-20210813205520-393438: &rest.Config{Host:"https://192.168.72.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles
/running-upgrade-20210813205520-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
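
Note: the kubeconfig.go verify step above fails because the profile's context is missing from the kubeconfig, which triggers the repair. A minimal sketch of that presence check against client-go's clientcmd package; the kubeconfig path in main is illustrative:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// hasContext reports whether the kubeconfig file already names the given
// context - the condition whose failure is logged as "does not appear in".
func hasContext(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := hasContext(os.Getenv("KUBECONFIG"), "running-upgrade-20210813205520-393438")
	fmt.Println(ok, err)
}
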
	I0813 20:58:47.895096  429197 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 20:58:47.903883  429197 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -65,4 +65,10 @@
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	 clusterCIDR: "10.244.0.0/16"
	-metricsBindAddress: 192.168.72.177:10249
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
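
Note: the "needs reconfigure" decision above rests on the exit status of `diff -u` over the old and new kubeadm.yaml. A minimal sketch of that check; the function name and exit-code interpretation (0 = identical, 1 = differ, else a real failure, per POSIX diff) are illustrative, not minikube's actual kubeadm.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u old new` and maps its exit status:
// success means the files match, exit code 1 means they differ
// (reconfigure needed), anything else is a genuine error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // differences found
	}
	return false, "", err
}

func main() {
	changed, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
	fmt.Print(diff)
}
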
	I0813 20:58:47.903911  429197 kubeadm.go:1032] stopping kube-system containers ...
	I0813 20:58:47.903927  429197 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 20:58:47.903973  429197 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:58:47.931446  429197 cri.go:76] found id: "c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30"
	I0813 20:58:47.931466  429197 cri.go:76] found id: "e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28"
	I0813 20:58:47.931475  429197 cri.go:76] found id: "3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28"
	I0813 20:58:47.931484  429197 cri.go:76] found id: "ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1"
	I0813 20:58:47.931489  429197 cri.go:76] found id: "3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f"
	I0813 20:58:47.931494  429197 cri.go:76] found id: "1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb"
	I0813 20:58:47.931507  429197 cri.go:76] found id: "0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27"
	I0813 20:58:47.931514  429197 cri.go:76] found id: ""
	I0813 20:58:47.931522  429197 cri.go:221] Stopping containers: [c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30 e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28 3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28 ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1 3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f 1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb 0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27]
	I0813 20:58:47.931569  429197 ssh_runner.go:149] Run: which crictl
	I0813 20:58:47.958714  429197 ssh_runner.go:149] Run: sudo /bin/crictl stop c43ab17b0774aa10c9d1fbb1dabb095409e9b3a29f56184ee03392a8cf466a30 e17e08642ccf3029455c05f9ef2d581f6b84ad45d09b4c75a52093648d49be28 3affbef61aea25f1234c1c861ef47b68217e5d89707a851f56ebe78aa142ca28 ff4beae820c4c70b8321364f2c71658defd266b1c4c7160a5cb29f2e4af7a0d1 3f2845c78fc70aa002ff28cda11b386b7d144b2d7bcb8d5a58972d5e50f5255f 1d53355ae91096bb45667d179bb9f9c4ac70ddb1f821dfa7689260d62e6c5eeb 0f31470cf99d28981b50b7be4e4f2798bbfd4eae25eadc0a92f709971a55af27
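
Note: the stop above passes all seven container IDs to a single `crictl stop` invocation rather than spawning one process per container. A minimal sketch of that batching; names and the truncated IDs in main are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// stopAll issues one batched `crictl stop` over the whole ID list,
// matching the single invocation in the log above.
func stopAll(ids []string) error {
	args := append([]string{"/bin/crictl", "stop"}, ids...)
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("crictl stop: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(stopAll([]string{"c43ab17b0774", "e17e08642ccf"})) // IDs truncated for brevity
}
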
	I0813 20:58:48.061021  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:48.061501  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:48.061531  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:48.061451  429868 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 20:58:51.431654  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:51.432173  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | unable to find current IP address of domain force-systemd-env-20210813205836-393438 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:58:51.432212  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | I0813 20:58:51.432125  429868 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 20:58:50.354953  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:50.354991  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:50.355010  429159 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0813 20:58:51.702107  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:51.702145  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:51.702160  429159 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0813 20:58:53.585061  429159 system_pods.go:59] 1 kube-system pods found
	I0813 20:58:53.585098  429159 system_pods.go:61] "storage-provisioner" [405d8164-fc79-11eb-952b-52540050ef93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0813 20:58:53.585115  429159 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
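
Note: the retry.go lines above re-poll system pods with a growing delay until enough have appeared or the caller times out. A minimal sketch of that shape; the growth factor, logging, and names are assumptions, not minikube's actual retry implementation:

package main

import (
	"fmt"
	"log"
	"time"
)

// retryUntil re-runs check with an increasing delay until it succeeds
// or the deadline passes.
func retryUntil(deadline time.Time, check func() error) error {
	delay := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		log.Printf("will retry after %v: %v", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
}

func main() {
	n := 0
	err := retryUntil(time.Now().Add(10*time.Second), func() error {
		n++
		if n < 3 {
			return fmt.Errorf("only %d pod(s) have shown up", n)
		}
		return nil
	})
	fmt.Println(err)
}
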
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	33fae69af6bcf       6e38f40d628db       30 seconds ago       Exited              storage-provisioner       0                   76aee79f917be
	b6372d9d76486       296a6d5035e2d       44 seconds ago       Running             coredns                   1                   cfc4c8785e479
	afabb5f130410       0369cf4303ffd       45 seconds ago       Running             etcd                      1                   3f41ec729ef71
	57f3f32f280d8       bc2bb319a7038       45 seconds ago       Running             kube-controller-manager   1                   ce1823a3db17a
	1053b5b4ba3ab       3d174f00aa39e       46 seconds ago       Running             kube-apiserver            1                   a655f217cf1c5
	0d1a942c8b8c2       adb2816ea823a       46 seconds ago       Running             kube-proxy                1                   47e050012dbca
	1d84b053549cf       6be0dc1302e30       46 seconds ago       Running             kube-scheduler            1                   53f314c6cf963
	1bba0d6deb033       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   3f6f239c2851f
	63c0cc1fc4c0c       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   b1f1f31f28005
	698bbea7ce6e9       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   5a66336a35add
	df02c38abac90       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   4cf745987f602
	68bad43283064       bc2bb319a7038       2 minutes ago        Exited              kube-controller-manager   0                   5340b4aa5ca39
	11c2753c9a8a7       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   304b611d719ea
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:58:56 UTC. --
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.078142311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-pause-20210813205520-393438,Uid:86a000e5c08d32d80b2fd4e89cd34dd1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.145266794Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.321521915Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\" returns successfully"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.349622186Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.353268082Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.376810925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-jzmnb,Uid:ea00ae4c-f4d9-414c-8762-6314a96c8a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.451595226Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.633919582Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.635324605Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.770314446Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.016041628Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.229109322Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\" returns successfully"
	Aug 13 20:58:15 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:15.472167045Z" level=info msg="StartContainer for \"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5\" returns successfully"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.856093567Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.901091488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a pid=4886
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.481756294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.492027606Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.607213854Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.614295374Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.876068804Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\" returns successfully"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.156236073Z" level=info msg="Finish piping stderr of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.158102368Z" level=info msg="Finish piping stdout of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.159567062Z" level=info msg="TaskExit event &TaskExit{ContainerID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,Pid:4945,ExitStatus:255,ExitedAt:2021-08-13 20:58:41.157732657 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217770540Z" level=info msg="shim disconnected" id=33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217941244Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
	
	* 
	* ==> coredns [63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:57:49.980199       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.978) (total time: 30001ms):
	Trace[2019727887]: [30.001847928s] [30.001847928s] END
	E0813 20:57:49.980279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.980655       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[939984059]: [30.00501838s] [30.00501838s] END
	E0813 20:57:49.980691       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.981307       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[911902081]: [30.005916603s] [30.005916603s] END
	E0813 20:57:49.981521       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d] <==
	* E0813 20:58:20.310855       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.863604] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.032050] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.917916] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +2.669268] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.335717] vboxguest: loading out-of-tree module taints kernel.
	[  +0.008488] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:56] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.927578] systemd-fstab-generator[2132]: Ignoring "noauto" for root device
	[  +0.140064] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.195734] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +8.321149] systemd-fstab-generator[2386]: Ignoring "noauto" for root device
	[Aug13 20:57] systemd-fstab-generator[2823]: Ignoring "noauto" for root device
	[ +16.072552] kauditd_printk_skb: 38 callbacks suppressed
	[ +34.372009] kauditd_printk_skb: 116 callbacks suppressed
	[  +3.958113] NFSD: Unable to end grace period: -110
	[Aug13 20:58] systemd-fstab-generator[3706]: Ignoring "noauto" for root device
	[  +0.206181] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +0.261980] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +19.584639] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.482860] systemd-fstab-generator[4981]: Ignoring "noauto" for root device
	[  +0.846439] systemd-fstab-generator[5035]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5] <==
	* 2021-08-13 20:58:16.461857 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (198.960862ms) to execute
	2021-08-13 20:58:16.462013 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 " with result "range_response_count:0 size:5" took too long (199.025411ms) to execute
	2021-08-13 20:58:16.462116 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (190.42222ms) to execute
	2021-08-13 20:58:16.462337 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (179.184455ms) to execute
	2021-08-13 20:58:16.462702 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (172.711746ms) to execute
	2021-08-13 20:58:16.462925 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (170.528555ms) to execute
	2021-08-13 20:58:16.463221 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.293847ms) to execute
	2021-08-13 20:58:16.463747 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (158.490371ms) to execute
	2021-08-13 20:58:16.464124 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (152.464331ms) to execute
	2021-08-13 20:58:16.477058 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (151.343452ms) to execute
	2021-08-13 20:58:16.478005 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (142.028022ms) to execute
	2021-08-13 20:58:16.478939 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (142.259692ms) to execute
	2021-08-13 20:58:16.479721 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (129.328346ms) to execute
	2021-08-13 20:58:16.479967 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (126.882803ms) to execute
	2021-08-13 20:58:16.480303 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:11 size:5977" took too long (116.866258ms) to execute
	2021-08-13 20:58:16.480852 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (116.970061ms) to execute
	2021-08-13 20:58:23.354247 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/cluster-admin\" " with result "range_response_count:1 size:718" took too long (1.914180768s) to execute
	2021-08-13 20:58:23.356685 W | etcdserver: request "header:<ID:14244176716868856811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" value_size:717 lease:5020804680014080881 >> failure:<>>" with result "size:16" took too long (1.23562281s) to execute
	2021-08-13 20:58:23.370142 W | wal: sync duration of 1.250273887s, expected less than 1s
	2021-08-13 20:58:23.370676 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.152835664s) to execute
	2021-08-13 20:58:23.371565 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.728436243s) to execute
	2021-08-13 20:58:23.371769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.847351028s) to execute
	2021-08-13 20:58:23.378962 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-jzmnb\" " with result "range_response_count:1 size:4862" took too long (671.753147ms) to execute
	2021-08-13 20:58:24.705568 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:1 size:4394" took too long (221.501911ms) to execute
	2021-08-13 20:58:26.341296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418] <==
	* 2021-08-13 20:56:51.867973 I | embed: serving client requests on 192.168.61.151:2379
	2021-08-13 20:56:51.875825 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:57:01.271062 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" " with result "range_response_count:0 size:5" took too long (480.2351ms) to execute
	2021-08-13 20:57:01.272131 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813205520-393438\" " with result "range_response_count:1 size:3758" took too long (875.676682ms) to execute
	2021-08-13 20:57:01.273551 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f12dc\" " with result "range_response_count:1 size:735" took too long (792.283833ms) to execute
	2021-08-13 20:57:02.171621 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (872.818648ms) to execute
	2021-08-13 20:57:02.172160 W | etcdserver: request "header:<ID:14244176716848216677 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:222 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3993 >> failure:<request_range:<key:\"/registry/minions/pause-20210813205520-393438\" > >>" with result "size:16" took too long (128.660032ms) to execute
	2021-08-13 20:57:02.172330 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (871.615956ms) to execute
	2021-08-13 20:57:02.172733 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f2f58\" " with result "range_response_count:1 size:733" took too long (859.92991ms) to execute
	2021-08-13 20:57:02.172849 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (853.236151ms) to execute
	2021-08-13 20:57:09.290631 W | etcdserver: request "header:<ID:14244176716848216792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3277 >> failure:<>>" with result "size:5" took too long (472.704737ms) to execute
	2021-08-13 20:57:09.291659 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (897.879132ms) to execute
	2021-08-13 20:57:09.298807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210813205520-393438\" " with result "range_response_count:1 size:4986" took too long (528.421007ms) to execute
	2021-08-13 20:57:09.299124 W | etcdserver: read-only range request "key:\"/registry/csinodes/pause-20210813205520-393438\" " with result "range_response_count:1 size:656" took too long (894.254864ms) to execute
	2021-08-13 20:57:13.314052 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (127.466898ms) to execute
	2021-08-13 20:57:13.314663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (132.387511ms) to execute
	2021-08-13 20:57:16.343764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:20.988739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:30.989151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:39.442816 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (120.094417ms) to execute
	2021-08-13 20:57:40.988900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:50.989064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:58:00.244154 W | etcdserver: request "header:<ID:14244176716848217456 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:483 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:69 lease:5020804679993441646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>" with result "size:16" took too long (162.220853ms) to execute
	2021-08-13 20:58:00.245134 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (881.389444ms) to execute
	2021-08-13 20:58:00.989778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:59:06 up 3 min,  0 users,  load average: 1.26, 0.86, 0.35
	Linux pause-20210813205520-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c] <==
	* I0813 20:58:20.351321       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 20:58:20.372737       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0813 20:58:20.375890       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 20:58:20.387225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 20:58:20.401103       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:58:20.403283       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 20:58:20.407207       1 cache.go:39] Caches are synced for autoregister controller
	I0813 20:58:20.410957       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 20:58:21.065658       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 20:58:21.066635       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 20:58:21.090819       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 20:58:23.358425       1 trace.go:205] Trace[1442514083]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.628) (total time: 1729ms):
	Trace[1442514083]: ---"Object stored in database" 1729ms (20:58:00.358)
	Trace[1442514083]: [1.729557914s] [1.729557914s] END
	I0813 20:58:23.359893       1 trace.go:205] Trace[553017594]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 20:58:21.438) (total time: 1920ms):
	Trace[553017594]: ---"About to write a response" 1919ms (20:58:00.358)
	Trace[553017594]: [1.920866407s] [1.920866407s] END
	I0813 20:58:23.381663       1 trace.go:205] Trace[1143050190]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jzmnb,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:22.699) (total time: 682ms):
	Trace[1143050190]: ---"About to write a response" 681ms (20:58:00.380)
	Trace[1143050190]: [682.310081ms] [682.310081ms] END
	I0813 20:58:25.230359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:58:25.281700       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:58:25.373725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:58:25.413105       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:58:25.560667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016] <==
	* I0813 20:57:09.309542       1 trace.go:205] Trace[2046907584]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.501) (total time: 806ms):
	Trace[2046907584]: [806.482297ms] [806.482297ms] END
	I0813 20:57:09.310802       1 trace.go:205] Trace[146959614]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.771) (total time: 538ms):
	Trace[146959614]: ---"Object stored in database" 538ms (20:57:00.310)
	Trace[146959614]: [538.954794ms] [538.954794ms] END
	I0813 20:57:09.311138       1 trace.go:205] Trace[1128950750]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20210813205520-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1128950750]: ---"About to write a response" 537ms (20:57:00.307)
	Trace[1128950750]: [541.267103ms] [541.267103ms] END
	I0813 20:57:09.311248       1 trace.go:205] Trace[1268223707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1268223707]: ---"Object stored in database" 540ms (20:57:00.310)
	Trace[1268223707]: [541.971563ms] [541.971563ms] END
	I0813 20:57:09.311433       1 trace.go:205] Trace[1977445463]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.772) (total time: 538ms):
	Trace[1977445463]: ---"Object stored in database" 537ms (20:57:00.310)
	Trace[1977445463]: [538.348208ms] [538.348208ms] END
	I0813 20:57:09.321803       1 trace.go:205] Trace[494614999]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 552ms):
	Trace[494614999]: [552.453895ms] [552.453895ms] END
	I0813 20:57:09.345220       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:57:16.259955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:57:16.380865       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:57:37.272234       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:57:37.272418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:57:37.272507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:58:00.246413       1 trace.go:205] Trace[1997979141]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (13-Aug-2021 20:57:59.258) (total time: 987ms):
	Trace[1997979141]: ---"Transaction committed" 984ms (20:58:00.246)
	Trace[1997979141]: [987.521712ms] [987.521712ms] END
	
	* 
	* ==> kube-controller-manager [57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849] <==
	* I0813 20:58:25.074041       1 daemon_controller.go:285] Starting daemon sets controller
	I0813 20:58:25.074050       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0813 20:58:25.116517       1 controllermanager.go:574] Started "horizontalpodautoscaling"
	I0813 20:58:25.116556       1 horizontal.go:169] Starting HPA controller
	I0813 20:58:25.116758       1 shared_informer.go:240] Waiting for caches to sync for HPA
	E0813 20:58:25.120839       1 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0813 20:58:25.120857       1 controllermanager.go:566] Skipping "service"
	I0813 20:58:25.124370       1 controllermanager.go:574] Started "persistentvolume-expander"
	I0813 20:58:25.124569       1 expand_controller.go:327] Starting expand controller
	I0813 20:58:25.124579       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0813 20:58:25.175876       1 controllermanager.go:574] Started "namespace"
	I0813 20:58:25.176251       1 namespace_controller.go:200] Starting namespace controller
	I0813 20:58:25.176376       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0813 20:58:25.185657       1 controllermanager.go:574] Started "serviceaccount"
	I0813 20:58:25.187325       1 serviceaccounts_controller.go:117] Starting service account controller
	I0813 20:58:25.187340       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0813 20:58:25.192151       1 controllermanager.go:574] Started "replicaset"
	I0813 20:58:25.192315       1 replica_set.go:182] Starting replicaset controller
	I0813 20:58:25.192327       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0813 20:58:25.200141       1 controllermanager.go:574] Started "bootstrapsigner"
	I0813 20:58:25.200611       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0813 20:58:25.204061       1 controllermanager.go:574] Started "cronjob"
	I0813 20:58:25.204578       1 cronjob_controllerv2.go:125] Starting cronjob controller v2
	I0813 20:58:25.204590       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 20:58:25.207401       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8] <==
	* I0813 20:57:15.593972       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:57:15.593991       1 disruption.go:371] Sending events to api server.
	I0813 20:57:15.596695       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:57:15.636700       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:57:15.652896       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:57:15.701400       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:57:15.701628       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0813 20:57:15.701702       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:57:15.701748       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:57:15.701825       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:57:15.702024       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	I0813 20:57:15.735577       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:57:15.751667       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:57:15.767285       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:15.796364       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0813 20:57:15.847876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:16.199991       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.200121       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:57:16.224599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.277997       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:57:16.457337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlf5c"
	I0813 20:57:16.545672       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-fhxw7"
	I0813 20:57:16.596799       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-jzmnb"
	I0813 20:57:16.804186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:57:16.819742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-fhxw7"
	
	* 
	* ==> kube-proxy [0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5] <==
	* E0813 20:58:20.334846       1 node.go:161] Failed to retrieve node info: nodes "pause-20210813205520-393438" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0813 20:58:21.364522       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:58:21.365223       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:58:21.366125       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:58:23.461362       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:58:23.462248       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:58:23.465333       1 server_others.go:212] Using iptables Proxier.
	I0813 20:58:23.483125       1 server.go:643] Version: v1.21.3
	I0813 20:58:23.488959       1 config.go:315] Starting service config controller
	I0813 20:58:23.490323       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:58:23.490593       1 config.go:224] Starting endpoint slice config controller
	I0813 20:58:23.490606       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:58:23.512424       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:58:23.514744       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:58:23.591163       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:58:23.593313       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62] <==
	* I0813 20:57:20.123665       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:57:20.123841       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:57:20.123909       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:57:20.180054       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:57:20.180158       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:57:20.180173       1 server_others.go:212] Using iptables Proxier.
	I0813 20:57:20.181825       1 server.go:643] Version: v1.21.3
	I0813 20:57:20.184367       1 config.go:315] Starting service config controller
	I0813 20:57:20.184561       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:57:20.184600       1 config.go:224] Starting endpoint slice config controller
	I0813 20:57:20.184604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:57:20.203222       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:57:20.207174       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:57:20.285130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:57:20.285144       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c] <==
	* I0813 20:58:11.830530       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:58:20.220887       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:58:20.224373       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:58:20.224624       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:58:20.224640       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:58:20.341243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:58:20.343223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.343608       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.347257       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 20:58:20.444874       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:59:05.413646       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kube-scheduler [698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889] <==
	* I0813 20:56:57.340096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:56:57.373873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:57.375600       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:57.398047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:57.406392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:56:57.418940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.424521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:57.426539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:56:57.426578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:56:57.428616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:56:57.428811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:57.428897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:56:58.261670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:58.311937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:58.405804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.463800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:58.585826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.615525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:58.626736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.669986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:58.791820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:57:01.440271       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:59:12 UTC. --
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.453957    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454194    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454410    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454727    2832 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"pause-20210813205520-393438\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20210813205520-393438?timeout=10s\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.454757    2832 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Aug 13 20:58:09 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:09.611414    2832 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-20210813205520-393438.169af943ec02b0a4", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-20210813205520-393438", UID:"36ca0d21ef43020c8f018e62049ff15f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.61.151:8443/readyz\": dial tcp 192.168.61.151:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-20210813205520-393438"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03dd51755ca4ea4, ext:62717519917, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.61.151:8443: connect: connection refused'(may retry after sleeping)
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.428873    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429203    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.429890    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430165    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430396    2832 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:10.430620    2832 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.430883    2832 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:10 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:10.632976    2832 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:11.038724    2832 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210813205520-393438?timeout=10s": dial tcp 192.168.61.151:8443: connect: connection refused
	Aug 13 20:58:11 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:11.294567    2832 status_manager.go:566] "Failed to get status for pod" podUID=469cea0375ae276925a50e4dde7e4ace pod="kube-system/kube-scheduler-pause-20210813205520-393438" error="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-20210813205520-393438\": dial tcp 192.168.61.151:8443: connect: connection refused"
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.266431    2832 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:20 pause-20210813205520-393438 kubelet[2832]: E0813 20:58:20.269986    2832 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.541317    2832 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590904    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6vd\" (UniqueName: \"kubernetes.io/projected/99920d7c-bb8d-4c65-bf44-b56f23a40e53-kube-api-access-xw6vd\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:25 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:25.590979    2832 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/99920d7c-bb8d-4c65-bf44-b56f23a40e53-tmp\") pod \"storage-provisioner\" (UID: \"99920d7c-bb8d-4c65-bf44-b56f23a40e53\") "
	Aug 13 20:58:29 pause-20210813205520-393438 kubelet[2832]: I0813 20:58:29.362225    2832 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:58:29 pause-20210813205520-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000328290, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000328280)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f0480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bcc80, 0x18e5530, 0xc0003284c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004ceee0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004ceee0, 0x18b3d60, 0xc000311f80, 0x1, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ceee0, 0x3b9aca00, 0x0, 0x17a0501, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0004ceee0, 0x3b9aca00, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 20:59:06.860332  430249 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (19.09s)
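
Note on the storage-provisioner dump above: goroutine 90 is not crashed; it is parked in sync.Cond.Wait inside workqueue.(*Type).Get, which is the normal idle state of a k8s.io/client-go workqueue worker waiting for a volume work item. A minimal sketch of a worker loop with that shape (illustrative only, using client-go's workqueue package; not the provisioner's actual code, and the queued item is hypothetical):

	package main

	import (
		"fmt"

		"k8s.io/client-go/util/workqueue"
	)

	// runWorker mirrors the shape of the loop in the dump: Get() parks the
	// goroutine (sync.Cond.Wait) until an item arrives or the queue shuts down.
	func runWorker(q workqueue.Interface, done chan<- struct{}) {
		defer close(done)
		for {
			item, shutdown := q.Get() // blocks while the queue is empty
			if shutdown {
				return
			}
			fmt.Println("processing", item) // stand-in for the provisioning work
			q.Done(item)                    // mark the item processed
		}
	}

	func main() {
		q := workqueue.New()
		done := make(chan struct{})
		go runWorker(q, done)
		q.Add("pvc/default/example") // hypothetical work item
		q.ShutDown()                 // worker drains remaining items, then exits
		<-done
	}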

                                                
                                    
x
+
TestPause/serial/PauseAgain (11.9s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5: exit status 80 (6.028472955s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210813205520-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:59:15.921046  430471 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:59:15.921213  430471 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:15.921242  430471 out.go:311] Setting ErrFile to fd 2...
	I0813 20:59:15.921253  430471 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:15.921447  430471 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:59:15.921711  430471 out.go:305] Setting JSON to false
	I0813 20:59:15.921742  430471 mustload.go:65] Loading cluster: pause-20210813205520-393438
	I0813 20:59:15.922106  430471 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:59:15.922604  430471 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:15.922654  430471 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:15.939440  430471 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0813 20:59:15.939967  430471 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:15.940636  430471 main.go:130] libmachine: Using API Version  1
	I0813 20:59:15.940662  430471 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:15.941150  430471 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:15.941333  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetState
	I0813 20:59:15.945745  430471 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:59:15.946205  430471 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:15.946245  430471 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:15.962767  430471 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0813 20:59:15.966767  430471 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:15.967390  430471 main.go:130] libmachine: Using API Version  1
	I0813 20:59:15.967412  430471 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:15.967878  430471 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:15.968070  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:59:15.968867  430471 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210813205520-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 20:59:15.972744  430471 out.go:177] * Pausing node pause-20210813205520-393438 ... 
	I0813 20:59:15.972777  430471 host.go:66] Checking if "pause-20210813205520-393438" exists ...
	I0813 20:59:15.973334  430471 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:15.973372  430471 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:15.990765  430471 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0813 20:59:15.993328  430471 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:15.994029  430471 main.go:130] libmachine: Using API Version  1
	I0813 20:59:15.994054  430471 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:15.994739  430471 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:15.994928  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .DriverName
	I0813 20:59:15.995152  430471 ssh_runner.go:149] Run: systemctl --version
	I0813 20:59:15.995204  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:59:16.004157  430471 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:59:16.004762  430471 main.go:130] libmachine: (pause-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:e2:3d", ip: ""} in network mk-pause-20210813205520-393438: {Iface:virbr3 ExpiryTime:2021-08-13 21:55:55 +0000 UTC Type:0 Mac:52:54:00:52:e2:3d Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:pause-20210813205520-393438 Clientid:01:52:54:00:52:e2:3d}
	I0813 20:59:16.004807  430471 main.go:130] libmachine: (pause-20210813205520-393438) DBG | domain pause-20210813205520-393438 has defined IP address 192.168.61.151 and MAC address 52:54:00:52:e2:3d in network mk-pause-20210813205520-393438
	I0813 20:59:16.005015  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHPort
	I0813 20:59:16.010766  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.010953  430471 main.go:130] libmachine: (pause-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:59:16.011226  430471 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/pause-20210813205520-393438/id_rsa Username:docker}
	I0813 20:59:16.158997  430471 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:59:16.171390  430471 pause.go:50] kubelet running: true
	I0813 20:59:16.171440  430471 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 20:59:21.508823  430471 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.337357916s)
	I0813 20:59:21.508892  430471 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 20:59:21.508960  430471 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 20:59:21.733676  430471 cri.go:76] found id: "33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81"
	I0813 20:59:21.733708  430471 cri.go:76] found id: "b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d"
	I0813 20:59:21.733715  430471 cri.go:76] found id: "afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5"
	I0813 20:59:21.733721  430471 cri.go:76] found id: "57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849"
	I0813 20:59:21.733727  430471 cri.go:76] found id: "1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c"
	I0813 20:59:21.733732  430471 cri.go:76] found id: "0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5"
	I0813 20:59:21.733737  430471 cri.go:76] found id: "1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c"
	I0813 20:59:21.733743  430471 cri.go:76] found id: "1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62"
	I0813 20:59:21.733748  430471 cri.go:76] found id: "63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0"
	I0813 20:59:21.733757  430471 cri.go:76] found id: "698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889"
	I0813 20:59:21.733768  430471 cri.go:76] found id: "df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418"
	I0813 20:59:21.733776  430471 cri.go:76] found id: "68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8"
	I0813 20:59:21.733782  430471 cri.go:76] found id: "11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016"
	I0813 20:59:21.733787  430471 cri.go:76] found id: ""
	I0813 20:59:21.733852  430471 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 20:59:21.791197  430471 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","pid":4658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5/rootfs","created":"2021-08-13T20:58:12.412888441Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","pid":4590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c/rootfs","created":"2021-08-13T20:58:11.057580039Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","pid":4542,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c/rootfs","created":"2021-08-13T20:58:10.472076836Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","pid":4374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf/rootfs","created":"2021-08-13T20:58:09.064644692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210813205520-393438_86a000e5c08d32d80b2fd4e89cd34dd1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","pid":4269,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94/rootfs","created":"2021-08-13T20:58:08.848304205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mlf5c_c0812228-e936-4bfa-9fbb-a4d0707f2a63"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","pid":4244,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872/rootfs","created":"2021-08-13T20:58:08.637074413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210813205520-393438_469cea0375ae276925a50e4dde7e4ace"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","pid":4624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849/rootfs","created":"2021-08-13T20:58:11.449040242Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","pid":4909,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a/rootfs","created":"2021-08-13T20:58:26.026296621Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_99920d7c-bb8d-4c65-bf44-b56f23a40e53"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","pid":4366,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22/rootfs","created":"2021-08-13T20:58:09.044079666Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210813205520-393438_36ca0d21ef43020c8f018e62049ff15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","pid":4682,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5/rootfs","created":"2021-08-13T20:58:11.953431943Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","pid":4701,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d/rootfs","created":"2021-08-13T20:58:12.144003819Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","pid":4318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1/rootfs","created":"2021-08-13T20:58:08.900909328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210813205520-393438_81d9f8c777d9fb26ff8b7d9c93d26d5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","pid":4486,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682/rootfs","created":"2021-08-13T20:58:09.821697744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jzmnb_ea00ae4c-f4d9-414c-8762-6314a96c8a06"},"owner":"root"}]
	I0813 20:59:21.791474  430471 cri.go:113] list returned 13 containers
	I0813 20:59:21.791489  430471 cri.go:116] container: {ID:0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 Status:running}
	I0813 20:59:21.791518  430471 cri.go:116] container: {ID:1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c Status:running}
	I0813 20:59:21.791525  430471 cri.go:116] container: {ID:1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c Status:running}
	I0813 20:59:21.791531  430471 cri.go:116] container: {ID:3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf Status:running}
	I0813 20:59:21.791538  430471 cri.go:118] skipping 3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf - not in ps
	I0813 20:59:21.791547  430471 cri.go:116] container: {ID:47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 Status:running}
	I0813 20:59:21.791554  430471 cri.go:118] skipping 47e050012dbca19a38705743976e702aa5815af3e39eaebbfe81753ef825ae94 - not in ps
	I0813 20:59:21.791572  430471 cri.go:116] container: {ID:53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 Status:running}
	I0813 20:59:21.791580  430471 cri.go:118] skipping 53f314c6cf963d0b7a2ce2addc78d39af1977ebeeb0041cf9eb5208c13771872 - not in ps
	I0813 20:59:21.791586  430471 cri.go:116] container: {ID:57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849 Status:running}
	I0813 20:59:21.791592  430471 cri.go:116] container: {ID:76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a Status:running}
	I0813 20:59:21.791598  430471 cri.go:118] skipping 76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a - not in ps
	I0813 20:59:21.791603  430471 cri.go:116] container: {ID:a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 Status:running}
	I0813 20:59:21.791610  430471 cri.go:118] skipping a655f217cf1c593801e3e12b8f146d58659a68597b4a75eb09c282cdb37a9f22 - not in ps
	I0813 20:59:21.791615  430471 cri.go:116] container: {ID:afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5 Status:running}
	I0813 20:59:21.791621  430471 cri.go:116] container: {ID:b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d Status:running}
	I0813 20:59:21.791626  430471 cri.go:116] container: {ID:ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 Status:running}
	I0813 20:59:21.791632  430471 cri.go:118] skipping ce1823a3db17ab7c022320520c4d6f3883120956070d204162dc421dc44b43c1 - not in ps
	I0813 20:59:21.791637  430471 cri.go:116] container: {ID:cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 Status:running}
	I0813 20:59:21.791643  430471 cri.go:118] skipping cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682 - not in ps
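	
	The cri.go lines above show minikube decoding the `runc list -f json` dump into {ID, Status} pairs and skipping entries that should not be paused directly. A minimal, self-contained sketch of that decode step, assuming only the JSON shape visible in the log; note the sandbox filter here keys off the io.kubernetes.cri.container-type annotation, whereas minikube's actual filter ("skipping ... - not in ps") compares against a separate container listing:
	
	    package main
	
	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )
	
	    // container mirrors the fields of interest from `runc list -f json`,
	    // matching the JSON dump earlier in this log.
	    type container struct {
	        ID          string            `json:"id"`
	        Status      string            `json:"status"`
	        Annotations map[string]string `json:"annotations"`
	    }
	
	    func main() {
	        // Same listing command the log shows minikube driving over SSH.
	        out, err := exec.Command("sudo", "runc",
	            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	        if err != nil {
	            fmt.Println("runc list:", err)
	            return
	        }
	        var cs []container
	        if err := json.Unmarshal(out, &cs); err != nil {
	            fmt.Println("decode:", err)
	            return
	        }
	        for _, c := range cs {
	            // Assumption: skip pod sandboxes via the CRI annotation; the
	            // real code path uses a different membership check.
	            if c.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
	                fmt.Printf("skipping %s - sandbox\n", c.ID)
	                continue
	            }
	            fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
	        }
	    }
	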
	I0813 20:59:21.791694  430471 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5
	I0813 20:59:21.820803  430471 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c
	I0813 20:59:21.858657  430471 out.go:177] 
	W0813 20:59:21.858832  430471 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5 1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:59:21Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 20:59:21.858876  430471 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 20:59:21.863623  430471 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 20:59:21.865248  430471 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210813205520-393438 --alsologtostderr -v=5" : exit status 80
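	
The root cause is visible in the runc usage message above: `runc pause` accepts exactly one container ID, but the pause path batched two IDs into a single invocation, so runc exited with status 1 and the test failed with GUEST_PAUSE (exit status 80). The preceding single-ID call at 20:59:21.791694 returned before the batched call was attempted; only the two-ID form fails. Below is a minimal sketch of the per-container form the command needs, issuing one `runc pause <id>` at a time. It is illustrative only, not minikube's actual fix, and it runs locally rather than over ssh_runner as minikube does:

	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    // pauseContainers issues one `runc pause <id>` per container, since the
	    // usage error above shows runc's pause subcommand takes exactly one
	    // argument. Illustrative sketch: minikube executes these commands on
	    // the guest over SSH, and its eventual fix may differ.
	    func pauseContainers(ids []string) error {
	        for _, id := range ids {
	            cmd := exec.Command("sudo", "runc",
	                "--root", "/run/containerd/runc/k8s.io", "pause", id)
	            if out, err := cmd.CombinedOutput(); err != nil {
	                return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
	            }
	        }
	        return nil
	    }
	
	    func main() {
	        // The two IDs from the failing batched invocation above.
	        ids := []string{
	            "0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5",
	            "1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c",
	        }
	        if err := pauseContainers(ids); err != nil {
	            fmt.Println(err)
	        }
	    }
	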
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
E0813 20:59:22.129848  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (349.240434ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25: (1.824357713s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:40:58 UTC | Fri, 13 Aug 2021 20:40:59 UTC |
	|         | node delete m03                          |                                          |         |         |                               |                               |
	| -p      | multinode-20210813202658-393438          | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:41:00 UTC | Fri, 13 Aug 2021 20:44:04 UTC |
	|         | stop                                     |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:48:01 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2          |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:01 UTC | Fri, 13 Aug 2021 20:49:01 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:02 UTC | Fri, 13 Aug 2021 20:49:03 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:03 UTC | Fri, 13 Aug 2021 20:49:05 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:38 UTC | Fri, 13 Aug 2021 20:52:46 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:47 UTC | Fri, 13 Aug 2021 20:52:48 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:48 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2           |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:41 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:41 UTC | Fri, 13 Aug 2021 20:54:41 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:42 UTC | Fri, 13 Aug 2021 20:54:42 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:55 UTC | Fri, 13 Aug 2021 20:55:02 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:20 UTC | Fri, 13 Aug 2021 20:55:20 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:33 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=kvm2                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:33 UTC | Fri, 13 Aug 2021 20:57:35 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                 |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:54 UTC | Fri, 13 Aug 2021 20:58:28 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:27 UTC | Fri, 13 Aug 2021 20:58:34 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| logs    | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:34 UTC | Fri, 13 Aug 2021 20:58:35 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:35 UTC | Fri, 13 Aug 2021 20:58:36 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813205735-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:35 UTC | Fri, 13 Aug 2021 20:58:58 UTC |
	|         | kubernetes-upgrade-20210813205735-393438 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210813205735-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:58 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | kubernetes-upgrade-20210813205735-393438 |                                          |         |         |                               |                               |
	| unpause | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:59:01
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:59:01.126131  430322 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:59:01.126334  430322 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:01.126345  430322 out.go:311] Setting ErrFile to fd 2...
	I0813 20:59:01.126349  430322 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:01.126438  430322 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:59:01.126658  430322 out.go:305] Setting JSON to false
	I0813 20:59:01.168018  430322 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6103,"bootTime":1628882238,"procs":194,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:59:01.168125  430322 start.go:121] virtualization: kvm guest
	I0813 20:59:01.170881  430322 out.go:177] * [kubernetes-upgrade-20210813205735-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:59:01.172410  430322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:01.171037  430322 notify.go:169] Checking for updates...
	I0813 20:59:01.173939  430322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:59:01.175272  430322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:01.176563  430322 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:59:01.176998  430322 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:59:01.177535  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.177578  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.190824  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0813 20:59:01.191328  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.192096  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.192126  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.192607  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.192838  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.193041  430322 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:59:01.193517  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.193562  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.208017  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0813 20:59:01.209032  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.209672  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.209701  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.210226  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.210435  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.248561  430322 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:59:01.248588  430322 start.go:278] selected driver: kvm2
	I0813 20:59:01.248595  430322 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210813205735-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813205735-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:01.248717  430322 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:59:01.250011  430322 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:01.250160  430322 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:59:01.262799  430322 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:59:01.263220  430322 cni.go:93] Creating CNI manager for ""
	I0813 20:59:01.263239  430322 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:59:01.263251  430322 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813205735-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-2021081320573
5-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:01.263415  430322 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:01.265573  430322 out.go:177] * Starting control plane node kubernetes-upgrade-20210813205735-393438 in cluster kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.265601  430322 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:59:01.265646  430322 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:59:01.265665  430322 cache.go:56] Caching tarball of preloaded images
	I0813 20:59:01.265770  430322 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:59:01.265790  430322 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 20:59:01.265928  430322 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/config.json ...
	I0813 20:59:01.266108  430322 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:59:01.266142  430322 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813205735-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:59:01.266204  430322 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813205735-393438" in 45.243µs
	I0813 20:59:01.266222  430322 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:59:01.266233  430322 fix.go:55] fixHost starting: 
	I0813 20:59:01.266656  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.266753  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.279307  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0813 20:59:01.279807  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.280359  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.280380  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.280834  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.281039  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.281186  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:59:01.284783  430322 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813205735-393438: state=Stopped err=<nil>
	I0813 20:59:01.284823  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	W0813 20:59:01.284949  430322 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:58:59.870373  429844 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.008983457s)
	I0813 20:58:59.871116  429844 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:58:59.871287  429844 ssh_runner.go:149] Run: which lz4
	I0813 20:58:59.880284  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:58:59.880368  429844 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:58:59.885231  429844 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:58:59.885260  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0813 20:59:00.541957  429197 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:59:00.550528  429197 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:59:00.550578  429197 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:59:00.558098  429197 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:59:00.564879  429197 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:59:00.564943  429197 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:59:00.573424  429197 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:00.581588  429197 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:00.581608  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:00.775307  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.073222  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.297863235s)
	I0813 20:59:02.073255  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.470011  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.646948  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.826557  429197 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:59:02.826636  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:03.341139  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:03.840915  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:04.340501  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:04.841157  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:05.341002  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:01.287021  430322 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210813205735-393438" ...
	I0813 20:59:01.287050  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Start
	I0813 20:59:01.287193  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring networks are active...
	I0813 20:59:01.289470  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring network default is active
	I0813 20:59:01.289937  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring network mk-kubernetes-upgrade-20210813205735-393438 is active
	I0813 20:59:01.290387  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Getting domain xml...
	I0813 20:59:01.292866  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Creating domain...
	I0813 20:59:01.738864  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Waiting to get IP...
	I0813 20:59:01.739927  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.740403  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has current primary IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.740436  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Found IP for machine: 192.168.39.75
	I0813 20:59:01.740459  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserving static IP address...
	I0813 20:59:01.741000  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210813205735-393438", mac: "52:54:00:50:ef:93", ip: "192.168.39.75"} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:01.741035  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210813205735-393438 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210813205735-393438", mac: "52:54:00:50:ef:93", ip: "192.168.39.75"}
	I0813 20:59:01.741051  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserved static IP address: 192.168.39.75
	I0813 20:59:01.741072  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Waiting for SSH to be available...
	I0813 20:59:01.741092  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Getting to WaitForSSH function...
	I0813 20:59:01.747891  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.748311  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:01.748341  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.748794  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH client type: external
	I0813 20:59:01.748825  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa (-rw-------)
	I0813 20:59:01.748867  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:59:01.748881  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | About to run SSH command:
	I0813 20:59:01.748893  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | exit 0
	I0813 20:59:04.251439  429844 containerd.go:546] Took 4.371086 seconds to copy over tarball
	I0813 20:59:04.251530  429844 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:59:05.840828  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:06.340641  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:06.840589  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:07.340738  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:07.840607  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:08.340740  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:08.841423  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:09.341300  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:09.840616  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:10.340878  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:10.840590  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:11.340551  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:11.840776  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:12.341416  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:12.841143  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:13.340641  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:13.840699  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:14.340891  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:14.841272  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:15.340868  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:15.367894  429197 api_server.go:70] duration metric: took 12.541339791s to wait for apiserver process to appear ...
	I0813 20:59:15.367920  429197 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:59:15.367932  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:15.369463  429197 api_server.go:255] stopped: https://192.168.72.177:8443/healthz: Get "https://192.168.72.177:8443/healthz": dial tcp 192.168.72.177:8443: connect: connection refused
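	
	Process 429197 above waits for the apiserver in two stages: it polls `pgrep` until a kube-apiserver process exists (12.5s here), then probes https://192.168.72.177:8443/healthz until the endpoint stops refusing connections. A compact sketch of the second stage, assuming a plain HTTP poll with a deadline; minikube's real check validates the connection against the cluster CA, while this sketch skips TLS verification purely for illustration:
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    // waitForHealthz polls the apiserver /healthz endpoint until it answers
	    // 200 or the deadline passes, mirroring the api_server.go wait above.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                // Assumption: skip cert verification for the sketch only.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	    }
	
	    func main() {
	        if err := waitForHealthz("https://192.168.72.177:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	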
	I0813 20:59:14.967460  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 20:59:14.967832  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetConfigRaw
	I0813 20:59:14.968579  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:14.975946  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.976590  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:14.976617  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.977080  430322 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/config.json ...
	I0813 20:59:14.977327  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:14.977519  430322 machine.go:88] provisioning docker machine ...
	I0813 20:59:14.977551  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:14.977744  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:14.977932  430322 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210813205735-393438"
	I0813 20:59:14.977959  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:14.978140  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:14.984614  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.984978  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:14.985004  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.985249  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:14.985456  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:14.985630  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:14.985808  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:14.986008  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:14.986206  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:14.986228  430322 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813205735-393438 && echo "kubernetes-upgrade-20210813205735-393438" | sudo tee /etc/hostname
	I0813 20:59:15.154609  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813205735-393438
	
	I0813 20:59:15.154644  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.161683  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.162112  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.162145  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.162482  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:15.162710  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.162936  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.163108  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:15.163323  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:15.163532  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:15.163559  430322 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813205735-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813205735-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813205735-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:59:15.322283  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:59:15.322316  430322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikub
e/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:59:15.322360  430322 buildroot.go:174] setting up certificates
	I0813 20:59:15.322375  430322 provision.go:83] configureAuth start
	I0813 20:59:15.322388  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:15.322753  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:15.329092  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.329518  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.329541  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.330028  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.335895  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.336319  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.336343  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.336720  430322 provision.go:138] copyHostCerts
	I0813 20:59:15.336809  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:59:15.336821  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:59:15.336872  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:59:15.336988  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:59:15.336997  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:59:15.337026  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:59:15.337088  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:59:15.337150  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:59:15.337183  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:59:15.337294  430322 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813205735-393438 san=[192.168.39.75 192.168.39.75 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813205735-393438]
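	The SAN set logged above ends up in the generated server certificate. To confirm what was actually signed, the certificate can be inspected with openssl; a minimal sketch, using the ServerCertPath recorded earlier in this log:
	
		MK=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
		# print the SAN extension of the provisioned server cert
		openssl x509 -noout -text -in "$MK/machines/server.pem" | grep -A1 'Subject Alternative Name'
	
	This should list DNS and IP entries matching the san=[...] set above.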
	I0813 20:59:15.804742  430322 provision.go:172] copyRemoteCerts
	I0813 20:59:15.804803  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:59:15.804839  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.810630  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.811009  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.811045  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.811311  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:15.811513  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.811677  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:15.811829  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:15.907977  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:59:15.926182  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:59:15.969182  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:59:15.994398  430322 provision.go:86] duration metric: configureAuth took 672.012293ms
	I0813 20:59:15.994426  430322 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:59:15.994619  430322 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:59:15.994636  430322 machine.go:91] provisioned docker machine in 1.017093923s
	I0813 20:59:15.994647  430322 start.go:267] post-start starting for "kubernetes-upgrade-20210813205735-393438" (driver="kvm2")
	I0813 20:59:15.994656  430322 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:59:15.994706  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:15.998921  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:59:15.998955  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.005687  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.006139  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.006166  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.006764  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.006965  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.007139  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.007310  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:14.775039  429844 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.523480791s)
	I0813 20:59:14.775063  429844 containerd.go:553] Took 10.523589 seconds to extract the tarball
	I0813 20:59:14.775075  429844 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:59:14.847701  429844 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:59:15.046932  429844 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:59:15.115651  429844 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:59:15.156721  429844 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:59:15.176516  429844 docker.go:153] disabling docker service ...
	I0813 20:59:15.176575  429844 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:59:15.195373  429844 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:59:15.208929  429844 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:59:15.423095  429844 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:59:15.651161  429844 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:59:15.667410  429844 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:59:15.686989  429844 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSB0cnVlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
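	The printf argument above is the full containerd config, base64-encoded so it survives shell quoting; the pipeline itself decodes it into /etc/containerd/config.toml. The same blob can be decoded locally to inspect what was written:
	
		# sketch: paste the base64 string from the log line above in place of <blob>
		echo '<blob>' | base64 -d | head -n 5
	
	which prints the opening lines of the generated TOML:
	
		root = "/var/lib/containerd"
		state = "/run/containerd"
		oom_score = 0
		[grpc]
		  address = "/run/containerd/containerd.sock"
	
	Further down, the decoded config sets SystemdCgroup = true under the runc runtime options, consistent with the force-systemd-env profile being provisioned here.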
	I0813 20:59:15.707356  429844 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:59:15.716851  429844 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:59:15.716901  429844 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:59:15.737623  429844 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
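	The status 255 above is the expected first-boot case rather than a real failure: the /proc/sys/net/bridge/* entries only exist once the br_netfilter module is loaded, which is why the runner falls back to modprobe and then enables IPv4 forwarding. The same recovery as a standalone sketch:
	
		# load br_netfilter if the bridge sysctls are missing, then enable forwarding
		sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
		  || sudo modprobe br_netfilter
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"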
	I0813 20:59:15.746974  429844 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:59:15.934895  429844 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:59:16.002845  429844 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:59:16.002896  429844 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:59:16.010688  429844 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 20:59:17.115625  429844 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
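	Restarting containerd removes and re-creates /run/containerd/containerd.sock, so the first stat can race the daemon coming back up; that is why minikube announces a 60s wait and retries above. An equivalent shell-side wait, as a sketch:
	
		# poll for up to 60s until the containerd socket reappears
		for _ in $(seq 60); do
		  [ -S /run/containerd/containerd.sock ] && break
		  sleep 1
		done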
	I0813 20:59:17.122401  429844 start.go:413] Will wait 60s for crictl version
	I0813 20:59:17.122460  429844 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:59:17.161733  429844 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:59:17.161805  429844 ssh_runner.go:149] Run: containerd --version
	I0813 20:59:17.195014  429844 ssh_runner.go:149] Run: containerd --version
	I0813 20:59:17.227710  429844 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:59:17.227794  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetIP
	I0813 20:59:17.233294  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:59:17.233714  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fb:29", ip: ""} in network mk-force-systemd-env-20210813205836-393438: {Iface:virbr5 ExpiryTime:2021-08-13 21:58:53 +0000 UTC Type:0 Mac:52:54:00:ec:fb:29 Iaid: IPaddr:192.168.83.204 Prefix:24 Hostname:force-systemd-env-20210813205836-393438 Clientid:01:52:54:00:ec:fb:29}
	I0813 20:59:17.233767  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined IP address 192.168.83.204 and MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:59:17.233945  429844 ssh_runner.go:149] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0813 20:59:17.238972  429844 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:59:17.252336  429844 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:59:17.252395  429844 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:17.286326  429844 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:59:17.286348  429844 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:59:17.286402  429844 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:17.318852  429844 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:59:17.318878  429844 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:59:17.318944  429844 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:59:17.350917  429844 cni.go:93] Creating CNI manager for ""
	I0813 20:59:17.350945  429844 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:59:17.350958  429844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:59:17.350973  429844 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.204 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-20210813205836-393438 NodeName:force-systemd-env-20210813205836-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:59:17.351127  429844 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "force-systemd-env-20210813205836-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
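	This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml further down and handed to kubeadm init. If it needs checking without touching the node, kubeadm's dry-run mode prints what would be applied; a sketch, assuming the binary path used in this run:
	
		# preflight and render without applying any changes
		sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
		  --config /var/tmp/minikube/kubeadm.yaml --dry-run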
	
	I0813 20:59:17.351246  429844 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=force-systemd-env-20210813205836-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.204 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
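	One detail in the kubelet drop-in above: the bare ExecStart= line is deliberate. systemd allows only a single ExecStart for non-oneshot services, so an override drop-in must first clear the inherited command before assigning the new one. The pattern in isolation (placeholder command; the real line is shown above):
	
		[Service]
		ExecStart=
		ExecStart=/new/command --new-flags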
	I0813 20:59:17.351312  429844 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:59:17.359902  429844 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:59:17.359971  429844 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:59:17.366944  429844 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (555 bytes)
	I0813 20:59:17.379424  429844 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:59:17.391585  429844 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0813 20:59:17.406745  429844 ssh_runner.go:149] Run: grep 192.168.83.204	control-plane.minikube.internal$ /etc/hosts
	I0813 20:59:17.411069  429844 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:59:17.422330  429844 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438 for IP: 192.168.83.204
	I0813 20:59:17.422387  429844 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:59:17.422411  429844 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:59:17.422464  429844 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key
	I0813 20:59:17.422475  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt with IP's: []
	I0813 20:59:17.589717  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt ...
	I0813 20:59:17.589751  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt: {Name:mk6b638656acdae073d352761d68fbce2d483a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.589968  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key ...
	I0813 20:59:17.589989  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key: {Name:mk9ac2f2539726279887f5743a365e91b105c985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.590095  429844 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1
	I0813 20:59:17.590108  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 with IP's: [192.168.83.204 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:59:17.670870  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 ...
	I0813 20:59:17.670902  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1: {Name:mk74913ff00a65a919a90225ed0787c2cfa299a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.671104  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1 ...
	I0813 20:59:17.671122  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1: {Name:mk4e832137b5b0e86e874baf2ff0ef565095edcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.671224  429844 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt
	I0813 20:59:17.671292  429844 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key
	I0813 20:59:17.671342  429844 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key
	I0813 20:59:17.671350  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt with IP's: []
	I0813 20:59:17.811967  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt ...
	I0813 20:59:17.811999  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt: {Name:mka2285c334ce140bfabe8b380f8ad699fb95705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.812195  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key ...
	I0813 20:59:17.812213  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key: {Name:mkb6682fffa6b297b1d5d051bc98ddc5e21e1737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.812326  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 20:59:17.812347  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 20:59:17.812362  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 20:59:17.812374  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 20:59:17.812388  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:59:17.812401  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:59:17.812413  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:59:17.812425  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:59:17.812864  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:59:17.812941  429844 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:59:17.812954  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:59:17.812992  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:59:17.813034  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:59:17.813070  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:59:17.813140  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:59:17.813182  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem -> /usr/share/ca-certificates/393438.pem
	I0813 20:59:17.813203  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> /usr/share/ca-certificates/3934382.pem
	I0813 20:59:17.813220  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:17.815185  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:59:17.835646  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:59:17.854490  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:59:17.870584  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:59:17.888194  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:59:17.904940  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:59:17.923217  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:59:17.939714  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:59:17.956070  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:59:17.973489  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:59:17.990533  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:59:18.006640  429844 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:59:18.018556  429844 ssh_runner.go:149] Run: openssl version
	I0813 20:59:18.026398  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:59:18.036506  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.043248  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.043295  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.051624  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:59:18.060163  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:59:18.068077  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.073070  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.073115  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.079189  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:59:18.086616  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:59:18.094245  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.098831  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.098874  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.104647  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
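	The test/link pairs above reproduce OpenSSL's hashed CA directory layout: each trusted PEM in /etc/ssl/certs is reachable through a symlink named after its subject-name hash plus a .0 suffix, which is how b5213941.0 maps to minikubeCA.pem. Done by hand, one entry looks like this sketch:
	
		# compute the subject hash and create the <hash>.0 symlink
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"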
	I0813 20:59:18.112378  429844 kubeadm.go:390] StartCluster: {Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.204 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:18.112457  429844 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:59:18.112500  429844 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:59:18.152531  429844 cri.go:76] found id: ""
	I0813 20:59:18.152584  429844 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:59:18.160520  429844 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:18.167754  429844 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:59:18.174622  429844 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:59:18.174683  429844 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 20:59:15.870106  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:19.916324  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:59:19.916355  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:59:20.369815  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:20.377224  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:20.377246  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
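	Both responses above are normal during startup: the 403 appears while unauthenticated (system:anonymous) access to /healthz is still blocked, and the 500 just reports that the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks have not finished. The endpoint can also be polled by hand; a sketch against the address in this log (-k because the apiserver certificate is not in the local trust store):
	
		curl -k "https://192.168.72.177:8443/healthz?verbose"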
	I0813 20:59:16.101618  430322 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:59:16.107180  430322 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:59:16.107204  430322 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:59:16.107264  430322 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:59:16.107365  430322 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 20:59:16.107482  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:59:16.116337  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:59:16.137631  430322 start.go:270] post-start completed in 142.968259ms
	I0813 20:59:16.137673  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.137960  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.143893  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.144357  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.144393  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.144525  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.144729  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.144879  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.145013  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.145162  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:16.145340  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:16.145356  430322 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 20:59:16.273390  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888356.165572851
	
	I0813 20:59:16.273423  430322 fix.go:212] guest clock: 1628888356.165572851
	I0813 20:59:16.273435  430322 fix.go:225] Guest: 2021-08-13 20:59:16.165572851 +0000 UTC Remote: 2021-08-13 20:59:16.137936029 +0000 UTC m=+15.073653376 (delta=27.636822ms)
	I0813 20:59:16.273466  430322 fix.go:196] guest clock delta is within tolerance: 27.636822ms
	I0813 20:59:16.273484  430322 fix.go:57] fixHost completed within 15.007250096s
	I0813 20:59:16.273495  430322 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813205735-393438", held for 15.007279625s
	I0813 20:59:16.273547  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.273864  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:16.280481  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.280887  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.280930  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.281057  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.281265  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.281833  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.282114  430322 ssh_runner.go:149] Run: systemctl --version
	I0813 20:59:16.282129  430322 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:59:16.282147  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.282183  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.289563  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290185  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290473  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.290560  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290775  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.290971  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.291085  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.291206  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.291246  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.291408  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:16.291516  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.291696  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.291840  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.291971  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:16.377402  430322 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:59:16.377543  430322 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:20.423651  430322 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.04608471s)
	I0813 20:59:20.423837  430322 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 20:59:20.423909  430322 ssh_runner.go:149] Run: which lz4
	I0813 20:59:20.428467  430322 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:59:20.433162  430322 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:59:20.433188  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
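(The exchange above — crictl finding no preloaded images, a failed stat probe on /preloaded.tar.lz4, then an scp of the cached tarball — is a check-then-transfer pattern. A minimal Go sketch of the same idea, assuming the ssh/scp binaries are on PATH and taking the key path and guest address from the sshutil lines above; illustrative only, not minikube's actual ssh_runner implementation.)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Assumption: key path and address as shown in the "new ssh client" log line.
    	key := ".minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa"
    	host := "docker@192.168.39.75"

    	// Existence probe, mirroring: stat -c "%s %y" /preloaded.tar.lz4
    	probe := exec.Command("ssh", "-i", key, host, "stat", "-c", "%s %y", "/preloaded.tar.lz4")
    	if err := probe.Run(); err != nil {
    		// stat exited non-zero: the tarball is missing, so push the cached copy.
    		scp := exec.Command("scp", "-i", key,
    			"preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4",
    			host+":/preloaded.tar.lz4")
    		if out, err := scp.CombinedOutput(); err != nil {
    			fmt.Printf("transfer failed: %v\n%s", err, out)
    		}
    	}
    }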
	I0813 20:59:18.857423  429844 out.go:204]   - Generating certificates and keys ...
	I0813 20:59:21.576816  429844 out.go:204]   - Booting up control plane ...
	I0813 20:59:20.869652  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.055874  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:21.055917  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:59:21.370125  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.389753  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:21.389788  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:59:21.874753  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.890465  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 200:
	ok
	I0813 20:59:21.902570  429197 api_server.go:139] control plane version: v1.20.0
	I0813 20:59:21.902598  429197 api_server.go:129] duration metric: took 6.534669952s to wait for apiserver health ...
	I0813 20:59:21.902612  429197 cni.go:93] Creating CNI manager for ""
	I0813 20:59:21.902621  429197 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
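(The api_server.go lines above show the health wait: /healthz returns 500 while poststarthooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, then flips to 200 "ok". A minimal sketch of such a polling loop, assuming the endpoint from the log and skipping TLS verification for the apiserver's self-signed certificate; illustrative only.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: skip verification of the apiserver's self-signed cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.177:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			// 500s here correspond to poststarthooks still settling.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }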
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	33fae69af6bcf       6e38f40d628db       56 seconds ago       Exited              storage-provisioner       0                   76aee79f917be
	b6372d9d76486       296a6d5035e2d       About a minute ago   Running             coredns                   1                   cfc4c8785e479
	afabb5f130410       0369cf4303ffd       About a minute ago   Running             etcd                      1                   3f41ec729ef71
	57f3f32f280d8       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   1                   ce1823a3db17a
	1053b5b4ba3ab       3d174f00aa39e       About a minute ago   Running             kube-apiserver            1                   a655f217cf1c5
	0d1a942c8b8c2       adb2816ea823a       About a minute ago   Running             kube-proxy                1                   47e050012dbca
	1d84b053549cf       6be0dc1302e30       About a minute ago   Running             kube-scheduler            1                   53f314c6cf963
	1bba0d6deb033       adb2816ea823a       2 minutes ago        Exited              kube-proxy                0                   3f6f239c2851f
	63c0cc1fc4c0c       296a6d5035e2d       2 minutes ago        Exited              coredns                   0                   b1f1f31f28005
	698bbea7ce6e9       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   5a66336a35add
	df02c38abac90       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   4cf745987f602
	68bad43283064       bc2bb319a7038       2 minutes ago        Exited              kube-controller-manager   0                   5340b4aa5ca39
	11c2753c9a8a7       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   304b611d719ea
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:59:23 UTC. --
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.078142311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-pause-20210813205520-393438,Uid:86a000e5c08d32d80b2fd4e89cd34dd1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.145266794Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.321521915Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\" returns successfully"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.349622186Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.353268082Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.376810925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-jzmnb,Uid:ea00ae4c-f4d9-414c-8762-6314a96c8a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.451595226Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.633919582Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.635324605Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.770314446Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.016041628Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.229109322Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\" returns successfully"
	Aug 13 20:58:15 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:15.472167045Z" level=info msg="StartContainer for \"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5\" returns successfully"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.856093567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.901091488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a pid=4886
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.481756294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.492027606Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.607213854Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.614295374Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.876068804Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\" returns successfully"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.156236073Z" level=info msg="Finish piping stderr of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.158102368Z" level=info msg="Finish piping stdout of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.159567062Z" level=info msg="TaskExit event &TaskExit{ContainerID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,Pid:4945,ExitStatus:255,ExitedAt:2021-08-13 20:58:41.157732657 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217770540Z" level=info msg="shim disconnected" id=33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217941244Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
	
	* 
	* ==> coredns [63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:57:49.980199       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.978) (total time: 30001ms):
	Trace[2019727887]: [30.001847928s] [30.001847928s] END
	E0813 20:57:49.980279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.980655       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[939984059]: [30.00501838s] [30.00501838s] END
	E0813 20:57:49.980691       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.981307       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[911902081]: [30.005916603s] [30.005916603s] END
	E0813 20:57:49.981521       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	E0813 20:58:20.310855       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813205520-393438
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813205520-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813205520-393438
	                    minikube.k8s.io/updated_at=2021_08_13T20_57_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813205520-393438
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:57:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.151
	  Hostname:    pause-20210813205520-393438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 77eb9b5f6d424569bb9c035580bd499b
	  System UUID:                77eb9b5f-6d42-4569-bb9c-035580bd499b
	  Boot ID:                    f7c4e7cb-b855-4691-ba67-6445018f8c6d
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-jzmnb                               100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m7s
	  kube-system                 etcd-pause-20210813205520-393438                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m15s
	  kube-system                 kube-apiserver-pause-20210813205520-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-pause-20210813205520-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-mlf5c                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-pause-20210813205520-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m42s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m42s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m42s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m14s                  kubelet     Node pause-20210813205520-393438 status is now: NodeReady
	  Normal  Starting                 2m3s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 60s                    kube-proxy  Starting kube-proxy.
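(The request percentages in the Allocated-resources table follow from the node's allocatable pool shown above: 2 CPUs and 2033044Ki memory, with kubectl truncating to whole percent. A quick arithmetic check of those figures, as a minimal Go sketch using values taken from the table.)

    package main

    import "fmt"

    func main() {
    	// Allocatable pool from the "Allocatable" block above.
    	cpuAllocMilli := 2000.0 // 2 CPUs
    	memAllocKi := 2033044.0

    	// Totals from the Allocated-resources table.
    	cpuRequestMilli := 750.0       // 750m summed CPU requests
    	memRequestKi := 170.0 * 1024.0 // 170Mi summed memory requests

    	// int() truncates, matching kubectl's whole-percent display.
    	fmt.Printf("cpu: %d%%\n", int(cpuRequestMilli/cpuAllocMilli*100)) // 37%
    	fmt.Printf("memory: %d%%\n", int(memRequestKi/memAllocKi*100))   // 8%
    }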
	
	* 
	* ==> dmesg <==
	* [  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.863604] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.032050] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.917916] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +2.669268] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.335717] vboxguest: loading out-of-tree module taints kernel.
	[  +0.008488] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:56] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.927578] systemd-fstab-generator[2132]: Ignoring "noauto" for root device
	[  +0.140064] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.195734] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +8.321149] systemd-fstab-generator[2386]: Ignoring "noauto" for root device
	[Aug13 20:57] systemd-fstab-generator[2823]: Ignoring "noauto" for root device
	[ +16.072552] kauditd_printk_skb: 38 callbacks suppressed
	[ +34.372009] kauditd_printk_skb: 116 callbacks suppressed
	[  +3.958113] NFSD: Unable to end grace period: -110
	[Aug13 20:58] systemd-fstab-generator[3706]: Ignoring "noauto" for root device
	[  +0.206181] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +0.261980] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +19.584639] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.482860] systemd-fstab-generator[4981]: Ignoring "noauto" for root device
	[  +0.846439] systemd-fstab-generator[5035]: Ignoring "noauto" for root device
	[Aug13 20:59] systemd-fstab-generator[5622]: Ignoring "noauto" for root device
	[  +0.795991] systemd-fstab-generator[5652]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5] <==
	* 2021-08-13 20:58:16.461857 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (198.960862ms) to execute
	2021-08-13 20:58:16.462013 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 " with result "range_response_count:0 size:5" took too long (199.025411ms) to execute
	2021-08-13 20:58:16.462116 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (190.42222ms) to execute
	2021-08-13 20:58:16.462337 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (179.184455ms) to execute
	2021-08-13 20:58:16.462702 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (172.711746ms) to execute
	2021-08-13 20:58:16.462925 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (170.528555ms) to execute
	2021-08-13 20:58:16.463221 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.293847ms) to execute
	2021-08-13 20:58:16.463747 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (158.490371ms) to execute
	2021-08-13 20:58:16.464124 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (152.464331ms) to execute
	2021-08-13 20:58:16.477058 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (151.343452ms) to execute
	2021-08-13 20:58:16.478005 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (142.028022ms) to execute
	2021-08-13 20:58:16.478939 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (142.259692ms) to execute
	2021-08-13 20:58:16.479721 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (129.328346ms) to execute
	2021-08-13 20:58:16.479967 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (126.882803ms) to execute
	2021-08-13 20:58:16.480303 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:11 size:5977" took too long (116.866258ms) to execute
	2021-08-13 20:58:16.480852 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (116.970061ms) to execute
	2021-08-13 20:58:23.354247 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/cluster-admin\" " with result "range_response_count:1 size:718" took too long (1.914180768s) to execute
	2021-08-13 20:58:23.356685 W | etcdserver: request "header:<ID:14244176716868856811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" value_size:717 lease:5020804680014080881 >> failure:<>>" with result "size:16" took too long (1.23562281s) to execute
	2021-08-13 20:58:23.370142 W | wal: sync duration of 1.250273887s, expected less than 1s
	2021-08-13 20:58:23.370676 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.152835664s) to execute
	2021-08-13 20:58:23.371565 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.728436243s) to execute
	2021-08-13 20:58:23.371769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.847351028s) to execute
	2021-08-13 20:58:23.378962 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-jzmnb\" " with result "range_response_count:1 size:4862" took too long (671.753147ms) to execute
	2021-08-13 20:58:24.705568 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:1 size:4394" took too long (221.501911ms) to execute
	2021-08-13 20:58:26.341296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418] <==
	* 2021-08-13 20:56:51.867973 I | embed: serving client requests on 192.168.61.151:2379
	2021-08-13 20:56:51.875825 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:57:01.271062 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" " with result "range_response_count:0 size:5" took too long (480.2351ms) to execute
	2021-08-13 20:57:01.272131 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813205520-393438\" " with result "range_response_count:1 size:3758" took too long (875.676682ms) to execute
	2021-08-13 20:57:01.273551 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f12dc\" " with result "range_response_count:1 size:735" took too long (792.283833ms) to execute
	2021-08-13 20:57:02.171621 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (872.818648ms) to execute
	2021-08-13 20:57:02.172160 W | etcdserver: request "header:<ID:14244176716848216677 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:222 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3993 >> failure:<request_range:<key:\"/registry/minions/pause-20210813205520-393438\" > >>" with result "size:16" took too long (128.660032ms) to execute
	2021-08-13 20:57:02.172330 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (871.615956ms) to execute
	2021-08-13 20:57:02.172733 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f2f58\" " with result "range_response_count:1 size:733" took too long (859.92991ms) to execute
	2021-08-13 20:57:02.172849 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (853.236151ms) to execute
	2021-08-13 20:57:09.290631 W | etcdserver: request "header:<ID:14244176716848216792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3277 >> failure:<>>" with result "size:5" took too long (472.704737ms) to execute
	2021-08-13 20:57:09.291659 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (897.879132ms) to execute
	2021-08-13 20:57:09.298807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210813205520-393438\" " with result "range_response_count:1 size:4986" took too long (528.421007ms) to execute
	2021-08-13 20:57:09.299124 W | etcdserver: read-only range request "key:\"/registry/csinodes/pause-20210813205520-393438\" " with result "range_response_count:1 size:656" took too long (894.254864ms) to execute
	2021-08-13 20:57:13.314052 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (127.466898ms) to execute
	2021-08-13 20:57:13.314663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (132.387511ms) to execute
	2021-08-13 20:57:16.343764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:20.988739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:30.989151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:39.442816 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (120.094417ms) to execute
	2021-08-13 20:57:40.988900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:50.989064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:58:00.244154 W | etcdserver: request "header:<ID:14244176716848217456 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:483 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:69 lease:5020804679993441646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>" with result "size:16" took too long (162.220853ms) to execute
	2021-08-13 20:58:00.245134 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (881.389444ms) to execute
	2021-08-13 20:58:00.989778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:59:23 up 3 min,  0 users,  load average: 0.98, 0.82, 0.35
	Linux pause-20210813205520-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c] <==
	* Trace[553017594]: ---"About to write a response" 1919ms (20:58:00.358)
	Trace[553017594]: [1.920866407s] [1.920866407s] END
	I0813 20:58:23.381663       1 trace.go:205] Trace[1143050190]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jzmnb,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:22.699) (total time: 682ms):
	Trace[1143050190]: ---"About to write a response" 681ms (20:58:00.380)
	Trace[1143050190]: [682.310081ms] [682.310081ms] END
	I0813 20:58:25.230359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:58:25.281700       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:58:25.373725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:58:25.413105       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:58:25.560667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0813 20:59:15.369992       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:59:15.370169       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:59:15.370213       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:59:15.548831       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:59:15.551721       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:15.551910       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:15.553384       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:59:16.253189       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0813 20:59:16.253342       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:16.254420       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:16.258368       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:16.261299       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:59:16.262291       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:16.288769       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:59:16.754327       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016] <==
	* I0813 20:57:09.309542       1 trace.go:205] Trace[2046907584]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.501) (total time: 806ms):
	Trace[2046907584]: [806.482297ms] [806.482297ms] END
	I0813 20:57:09.310802       1 trace.go:205] Trace[146959614]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.771) (total time: 538ms):
	Trace[146959614]: ---"Object stored in database" 538ms (20:57:00.310)
	Trace[146959614]: [538.954794ms] [538.954794ms] END
	I0813 20:57:09.311138       1 trace.go:205] Trace[1128950750]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20210813205520-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1128950750]: ---"About to write a response" 537ms (20:57:00.307)
	Trace[1128950750]: [541.267103ms] [541.267103ms] END
	I0813 20:57:09.311248       1 trace.go:205] Trace[1268223707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1268223707]: ---"Object stored in database" 540ms (20:57:00.310)
	Trace[1268223707]: [541.971563ms] [541.971563ms] END
	I0813 20:57:09.311433       1 trace.go:205] Trace[1977445463]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.772) (total time: 538ms):
	Trace[1977445463]: ---"Object stored in database" 537ms (20:57:00.310)
	Trace[1977445463]: [538.348208ms] [538.348208ms] END
	I0813 20:57:09.321803       1 trace.go:205] Trace[494614999]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 552ms):
	Trace[494614999]: [552.453895ms] [552.453895ms] END
	I0813 20:57:09.345220       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:57:16.259955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:57:16.380865       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:57:37.272234       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:57:37.272418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:57:37.272507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:58:00.246413       1 trace.go:205] Trace[1997979141]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (13-Aug-2021 20:57:59.258) (total time: 987ms):
	Trace[1997979141]: ---"Transaction committed" 984ms (20:58:00.246)
	Trace[1997979141]: [987.521712ms] [987.521712ms] END
	
	* 
	* ==> kube-controller-manager [57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849] <==
	* I0813 20:59:16.709660       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:59:16.712193       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:59:16.722572       1 shared_informer.go:247] Caches are synced for node 
	I0813 20:59:16.722685       1 range_allocator.go:172] Starting range CIDR allocator
	I0813 20:59:16.722692       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0813 20:59:16.722698       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0813 20:59:16.728359       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 20:59:16.729318       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:59:16.729562       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	I0813 20:59:16.729905       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:59:16.731175       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	W0813 20:59:16.731760       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:59:16.732194       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:59:16.732423       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:59:16.732906       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:59:16.747287       1 shared_informer.go:247] Caches are synced for TTL 
	I0813 20:59:16.761010       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:59:16.769847       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:59:16.772854       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:59:16.793140       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0813 20:59:16.811782       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:59:16.811797       1 disruption.go:371] Sending events to api server.
	I0813 20:59:17.205296       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:59:17.277302       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:59:17.277855       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8] <==
	* I0813 20:57:15.593972       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:57:15.593991       1 disruption.go:371] Sending events to api server.
	I0813 20:57:15.596695       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:57:15.636700       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:57:15.652896       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:57:15.701400       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:57:15.701628       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0813 20:57:15.701702       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:57:15.701748       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:57:15.701825       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:57:15.702024       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	I0813 20:57:15.735577       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:57:15.751667       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:57:15.767285       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:15.796364       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0813 20:57:15.847876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:16.199991       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.200121       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:57:16.224599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.277997       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:57:16.457337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlf5c"
	I0813 20:57:16.545672       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-fhxw7"
	I0813 20:57:16.596799       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-jzmnb"
	I0813 20:57:16.804186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:57:16.819742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-fhxw7"
	
	* 
	* ==> kube-proxy [0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5] <==
	* E0813 20:58:20.334846       1 node.go:161] Failed to retrieve node info: nodes "pause-20210813205520-393438" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0813 20:58:21.364522       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:58:21.365223       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:58:21.366125       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:58:23.461362       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:58:23.462248       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:58:23.465333       1 server_others.go:212] Using iptables Proxier.
	I0813 20:58:23.483125       1 server.go:643] Version: v1.21.3
	I0813 20:58:23.488959       1 config.go:315] Starting service config controller
	I0813 20:58:23.490323       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:58:23.490593       1 config.go:224] Starting endpoint slice config controller
	I0813 20:58:23.490606       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:58:23.512424       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:58:23.514744       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:58:23.591163       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:58:23.593313       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62] <==
	* I0813 20:57:20.123665       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:57:20.123841       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:57:20.123909       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:57:20.180054       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:57:20.180158       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:57:20.180173       1 server_others.go:212] Using iptables Proxier.
	I0813 20:57:20.181825       1 server.go:643] Version: v1.21.3
	I0813 20:57:20.184367       1 config.go:315] Starting service config controller
	I0813 20:57:20.184561       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:57:20.184600       1 config.go:224] Starting endpoint slice config controller
	I0813 20:57:20.184604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:57:20.203222       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:57:20.207174       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:57:20.285130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:57:20.285144       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c] <==
	* I0813 20:58:11.830530       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:58:20.220887       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:58:20.224373       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:58:20.224624       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:58:20.224640       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:58:20.341243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:58:20.343223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.343608       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.347257       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 20:58:20.444874       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:59:05.413646       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831376       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831398       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831633       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831662       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831677       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831719       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831730       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831767       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831776       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831804       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicaSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831815       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831834       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831859       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
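	Note: the burst of "http2: client connection lost" warnings at 20:59:05-20:59:12 means every informer watch to the apiserver dropped at once, consistent with the apiserver connection being interrupted during this pause/restart sequence; the reflectors re-list automatically on reconnect. A hedged way to check whether the apiserver container is still up inside the VM (ssh syntax as used elsewhere in this report):
	
	    out/minikube-linux-amd64 ssh -p pause-20210813205520-393438 -- sudo crictl ps -a --name kube-apiserver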
	
	* 
	* ==> kube-scheduler [698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889] <==
	* I0813 20:56:57.340096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:56:57.373873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:57.375600       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:57.398047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:57.406392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:56:57.418940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.424521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:57.426539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:56:57.426578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:56:57.428616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:56:57.428811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:57.428897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:56:58.261670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:58.311937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:58.405804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.463800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:58.585826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.615525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:58.626736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.669986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:58.791820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:57:01.440271       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
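	Note: the wall of "forbidden" list errors at 20:56:57-20:56:58 is ordinary bootstrap ordering (the scheduler starts before its RBAC bindings are reconciled) and resolves by the cache-sync line at 20:57:01. Only if the extension-apiserver-authentication warning persisted would the fix suggested by the log message itself be needed, roughly (names per that message; the binding name is arbitrary):
	
	    kubectl --context pause-20210813205520-393438 -n kube-system create rolebinding \
	      extension-apiserver-authentication-reader \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=kube-system:kube-scheduler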
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:59:23 UTC. --
	Aug 13 20:59:16 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:16.442997    5630 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.121813    5630 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.123739    5630 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124150    5630 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124413    5630 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124800    5630 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124979    5630 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125347    5630 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125615    5630 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125862    5630 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126021    5630 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126351    5630 remote_image.go:50] parsed scheme: ""
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126598    5630 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126791    5630 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126948    5630 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127298    5630 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127591    5630 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127844    5630 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.128310    5630 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.153644    5630 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.9" apiVersion="v1alpha2"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: E0813 20:59:21.470849    5630 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.489062    5630 server.go:1190] "Started kubelet"
	Aug 13 20:59:21 pause-20210813205520-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:59:21 pause-20210813205520-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
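	Note: the kubelet reaches "Started kubelet" at 20:59:21.489 and is stopped by systemd within the same second. Given the Audit entries later in this report (unpause at 20:59:14-15 followed by the PauseAgain step), this reads as the pause operation stopping the kubelet unit rather than a crash, though this excerpt alone cannot prove it. The unit's recent transitions can be inspected from the host with:
	
	    out/minikube-linux-amd64 ssh -p pause-20210813205520-393438 -- sudo journalctl -u kubelet -n 40 --no-pager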
	
	* 
	* ==> storage-provisioner [33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000328290, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000328280)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f0480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bcc80, 0x18e5530, 0xc0003284c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004ceee0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004ceee0, 0x18b3d60, 0xc000311f80, 0x1, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ceee0, 0x3b9aca00, 0x0, 0x17a0501, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0004ceee0, 0x3b9aca00, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

-- /stdout --
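Note: the storage-provisioner tail above is the end of a goroutine dump, not an error as such; goroutine 90 is parked in sync.Cond.Wait inside workqueue Get, i.e. an idle worker waiting for volume work items. The head of the dump (which would say why it was emitted) is cut off here; if needed, the full log of the previous container instance can usually be retrieved with:

    kubectl --context pause-20210813205520-393438 -n kube-system logs storage-provisioner --previous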
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (389.063123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813205520-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813205520-393438 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813205520-393438 describe pod : exit status 1 (75.586059ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context pause-20210813205520-393438 describe pod : exit status 1
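Note: this failure is benign: the field selector in the step above matched no non-running pods, so the harness called `kubectl describe pod` with an empty name list and kubectl correctly refused ("resource name may not be empty"). The selector step can be reproduced directly:

    kubectl --context pause-20210813205520-393438 get po -A \
      --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'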
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (490.620722ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210813205520-393438 logs -n 25: (1.833267827s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:44:04 UTC | Fri, 13 Aug 2021 20:48:01 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	|         | --wait=true -v=8                         |                                          |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2          |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:48:01 UTC | Fri, 13 Aug 2021 20:49:01 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438-m03      | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:02 UTC | Fri, 13 Aug 2021 20:49:03 UTC |
	|         | multinode-20210813202658-393438-m03      |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20210813202658-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:49:03 UTC | Fri, 13 Aug 2021 20:49:05 UTC |
	|         | multinode-20210813202658-393438          |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:50:38 UTC | Fri, 13 Aug 2021 20:52:46 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:47 UTC | Fri, 13 Aug 2021 20:52:48 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:52:48 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2           |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:39 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210813205038-393438       | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:39 UTC | Fri, 13 Aug 2021 20:53:41 UTC |
	|         | test-preload-20210813205038-393438       |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:53:41 UTC | Fri, 13 Aug 2021 20:54:41 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:42 UTC | Fri, 13 Aug 2021 20:54:42 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:54:55 UTC | Fri, 13 Aug 2021 20:55:02 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210813205341-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:20 UTC | Fri, 13 Aug 2021 20:55:20 UTC |
	|         | scheduled-stop-20210813205341-393438     |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:33 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=kvm2                |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210813205520-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:33 UTC | Fri, 13 Aug 2021 20:57:35 UTC |
	|         | offline-containerd-20210813205520-393438 |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:55:21 UTC | Fri, 13 Aug 2021 20:57:54 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=kvm2                 |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:54 UTC | Fri, 13 Aug 2021 20:58:28 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:27 UTC | Fri, 13 Aug 2021 20:58:34 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| logs    | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:34 UTC | Fri, 13 Aug 2021 20:58:35 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:35 UTC | Fri, 13 Aug 2021 20:58:36 UTC |
	|         | stopped-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210813205735-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:35 UTC | Fri, 13 Aug 2021 20:58:58 UTC |
	|         | kubernetes-upgrade-20210813205735-393438 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210813205735-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:58:58 UTC | Fri, 13 Aug 2021 20:59:00 UTC |
	|         | kubernetes-upgrade-20210813205735-393438 |                                          |         |         |                               |                               |
	| unpause | -p pause-20210813205520-393438           | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:14 UTC | Fri, 13 Aug 2021 20:59:15 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| -p      | pause-20210813205520-393438              | pause-20210813205520-393438              | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:22 UTC | Fri, 13 Aug 2021 20:59:24 UTC |
	|         | logs -n 25                               |                                          |         |         |                               |                               |
	| start   | -p                                       | running-upgrade-20210813205520-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:57:40 UTC | Fri, 13 Aug 2021 20:59:24 UTC |
	|         | running-upgrade-20210813205520-393438    |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                       |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
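	Note: the Audit table shows several profiles starting in overlapping windows on the same host (e.g. pause-20210813205520-393438 and offline-containerd-20210813205520-393438 both at 20:55:21), which is why the "Last Start" log below interleaves lines from multiple PIDs (430322, 429844, 429197). The profiles present on a host can be listed with the same binary used throughout this report:
	
	    out/minikube-linux-amd64 profile list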
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:59:01
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:59:01.126131  430322 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:59:01.126334  430322 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:01.126345  430322 out.go:311] Setting ErrFile to fd 2...
	I0813 20:59:01.126349  430322 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:01.126438  430322 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:59:01.126658  430322 out.go:305] Setting JSON to false
	I0813 20:59:01.168018  430322 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6103,"bootTime":1628882238,"procs":194,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:59:01.168125  430322 start.go:121] virtualization: kvm guest
	I0813 20:59:01.170881  430322 out.go:177] * [kubernetes-upgrade-20210813205735-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:59:01.172410  430322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:01.171037  430322 notify.go:169] Checking for updates...
	I0813 20:59:01.173939  430322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:59:01.175272  430322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:01.176563  430322 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:59:01.176998  430322 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 20:59:01.177535  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.177578  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.190824  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0813 20:59:01.191328  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.192096  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.192126  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.192607  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.192838  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.193041  430322 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:59:01.193517  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.193562  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.208017  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0813 20:59:01.209032  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.209672  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.209701  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.210226  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.210435  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.248561  430322 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:59:01.248588  430322 start.go:278] selected driver: kvm2
	I0813 20:59:01.248595  430322 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210813205735-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210813205735-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:01.248717  430322 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:59:01.250011  430322 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:01.250160  430322 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:59:01.262799  430322 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:59:01.263220  430322 cni.go:93] Creating CNI manager for ""
	I0813 20:59:01.263239  430322 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:59:01.263251  430322 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210813205735-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210813205735-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:01.263415  430322 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:59:01.265573  430322 out.go:177] * Starting control plane node kubernetes-upgrade-20210813205735-393438 in cluster kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.265601  430322 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:59:01.265646  430322 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:59:01.265665  430322 cache.go:56] Caching tarball of preloaded images
	I0813 20:59:01.265770  430322 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 20:59:01.265790  430322 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 20:59:01.265928  430322 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/config.json ...
	I0813 20:59:01.266108  430322 cache.go:205] Successfully downloaded all kic artifacts
	I0813 20:59:01.266142  430322 start.go:313] acquiring machines lock for kubernetes-upgrade-20210813205735-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 20:59:01.266204  430322 start.go:317] acquired machines lock for "kubernetes-upgrade-20210813205735-393438" in 45.243µs
	I0813 20:59:01.266222  430322 start.go:93] Skipping create...Using existing machine configuration
	I0813 20:59:01.266233  430322 fix.go:55] fixHost starting: 
	I0813 20:59:01.266656  430322 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:01.266753  430322 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:01.279307  430322 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0813 20:59:01.279807  430322 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:01.280359  430322 main.go:130] libmachine: Using API Version  1
	I0813 20:59:01.280380  430322 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:01.280834  430322 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:01.281039  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:01.281186  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetState
	I0813 20:59:01.284783  430322 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210813205735-393438: state=Stopped err=<nil>
	I0813 20:59:01.284823  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	W0813 20:59:01.284949  430322 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 20:58:59.870373  429844 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.008983457s)
	I0813 20:58:59.871116  429844 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 20:58:59.871287  429844 ssh_runner.go:149] Run: which lz4
	I0813 20:58:59.880284  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 20:58:59.880368  429844 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 20:58:59.885231  429844 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:58:59.885260  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
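	Note: the PID 429844 sequence above is the preload fast path failing over: `crictl images --output json` finds the expected v1.21.3 images missing, `stat` confirms /preloaded.tar.lz4 is absent in the VM, so minikube copies the ~886 MiB (928,970,367-byte) preload tarball in from the host cache, which accounts for the multi-second gap. The cached tarball can be verified on the host with:
	
	    ls -lh /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4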
	I0813 20:59:00.541957  429197 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 20:59:00.550528  429197 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:59:00.550578  429197 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 20:59:00.558098  429197 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 20:59:00.564879  429197 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 20:59:00.564943  429197 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 20:59:00.573424  429197 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:00.581588  429197 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:00.581608  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:00.775307  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.073222  429197 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.297863235s)
	I0813 20:59:02.073255  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.470011  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.646948  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:02.826557  429197 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:59:02.826636  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:03.341139  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:03.840915  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:04.340501  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:04.841157  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:05.341002  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
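	Note: PID 429197 is re-running the standard kubeadm phase sequence (certs -> kubeconfig -> kubelet-start -> control-plane -> etcd) against the v1.20.0 binaries, then polling `pgrep` roughly every 500ms until the apiserver process appears; which profile this PID belongs to is not identifiable from this excerpt. The available phases for that kubeadm version can be listed inside the VM with:
	
	    sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase --help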
	I0813 20:59:01.287021  430322 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210813205735-393438" ...
	I0813 20:59:01.287050  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .Start
	I0813 20:59:01.287193  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring networks are active...
	I0813 20:59:01.289470  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring network default is active
	I0813 20:59:01.289937  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Ensuring network mk-kubernetes-upgrade-20210813205735-393438 is active
	I0813 20:59:01.290387  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Getting domain xml...
	I0813 20:59:01.292866  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Creating domain...
	I0813 20:59:01.738864  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Waiting to get IP...
	I0813 20:59:01.739927  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.740403  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has current primary IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.740436  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Found IP for machine: 192.168.39.75
	I0813 20:59:01.740459  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserving static IP address...
	I0813 20:59:01.741000  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210813205735-393438", mac: "52:54:00:50:ef:93", ip: "192.168.39.75"} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:01.741035  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210813205735-393438 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210813205735-393438", mac: "52:54:00:50:ef:93", ip: "192.168.39.75"}
	I0813 20:59:01.741051  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Reserved static IP address: 192.168.39.75
	I0813 20:59:01.741072  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Waiting for SSH to be available...
	I0813 20:59:01.741092  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Getting to WaitForSSH function...
	I0813 20:59:01.747891  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.748311  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:57:58 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:01.748341  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:01.748794  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH client type: external
	I0813 20:59:01.748825  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa (-rw-------)
	I0813 20:59:01.748867  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 20:59:01.748881  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | About to run SSH command:
	I0813 20:59:01.748893  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | exit 0
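
The probe above shells out to the system ssh binary and runs `exit 0` until it succeeds. A minimal Go sketch of that readiness check, assuming the host and user from the log; the helper name sshReady, the key path, and the one-second retry are illustrative, not minikube's actual driver code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "ssh ... exit 0" against the VM, mirroring the external
// SSH probe logged above. A nil error means the remote command exited 0,
// i.e. sshd is up and the key was accepted.
func sshReady(host, user, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		user+"@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	// Placeholder key path; the log uses the profile's machines/.../id_rsa.
	for !sshReady("192.168.39.75", "docker", "/path/to/id_rsa") {
		time.Sleep(time.Second)
	}
	fmt.Println("SSH is available")
}
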
	I0813 20:59:04.251439  429844 containerd.go:546] Took 4.371086 seconds to copy over tarball
	I0813 20:59:04.251530  429844 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 20:59:05.840828  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:06.340641  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:06.840589  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:07.340738  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:07.840607  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:08.340740  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:08.841423  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:09.341300  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:09.840616  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:10.340878  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:10.840590  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:11.340551  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:11.840776  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:12.341416  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:12.841143  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:13.340641  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:13.840699  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:14.340891  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:14.841272  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:15.340868  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:15.367894  429197 api_server.go:70] duration metric: took 12.541339791s to wait for apiserver process to appear ...
	I0813 20:59:15.367920  429197 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:59:15.367932  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:15.369463  429197 api_server.go:255] stopped: https://192.168.72.177:8443/healthz: Get "https://192.168.72.177:8443/healthz": dial tcp 192.168.72.177:8443: connect: connection refused
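
The loop above switches from pgrep-ing for the kube-apiserver process to polling its /healthz endpoint, treating "connection refused" as not-ready-yet. A sketch of that second phase; this is an illustration rather than minikube's api_server.go — the URL comes from the log, while the 500ms interval and InsecureSkipVerify (the apiserver presents a cluster-internal certificate) are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// with HTTP 200 or the deadline expires. Dial errors, like the
// "connection refused" above, are retried rather than treated as fatal.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skip verification: the probe targets a self-managed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	err := waitForHealthz("https://192.168.72.177:8443/healthz", time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
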
	I0813 20:59:14.967460  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 20:59:14.967832  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetConfigRaw
	I0813 20:59:14.968579  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:14.975946  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.976590  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:14.976617  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.977080  430322 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/kubernetes-upgrade-20210813205735-393438/config.json ...
	I0813 20:59:14.977327  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:14.977519  430322 machine.go:88] provisioning docker machine ...
	I0813 20:59:14.977551  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:14.977744  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:14.977932  430322 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210813205735-393438"
	I0813 20:59:14.977959  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:14.978140  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:14.984614  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.984978  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:14.985004  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:14.985249  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:14.985456  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:14.985630  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:14.985808  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:14.986008  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:14.986206  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:14.986228  430322 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210813205735-393438 && echo "kubernetes-upgrade-20210813205735-393438" | sudo tee /etc/hostname
	I0813 20:59:15.154609  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210813205735-393438
	
	I0813 20:59:15.154644  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.161683  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.162112  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.162145  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.162482  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:15.162710  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.162936  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.163108  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:15.163323  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:15.163532  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:15.163559  430322 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210813205735-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210813205735-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210813205735-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 20:59:15.322283  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 20:59:15.322316  430322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 20:59:15.322360  430322 buildroot.go:174] setting up certificates
	I0813 20:59:15.322375  430322 provision.go:83] configureAuth start
	I0813 20:59:15.322388  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetMachineName
	I0813 20:59:15.322753  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:15.329092  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.329518  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.329541  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.330028  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.335895  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.336319  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.336343  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.336720  430322 provision.go:138] copyHostCerts
	I0813 20:59:15.336809  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 20:59:15.336821  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 20:59:15.336872  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 20:59:15.336988  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 20:59:15.336997  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 20:59:15.337026  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 20:59:15.337088  430322 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 20:59:15.337150  430322 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 20:59:15.337183  430322 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 20:59:15.337294  430322 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210813205735-393438 san=[192.168.39.75 192.168.39.75 localhost 127.0.0.1 minikube kubernetes-upgrade-20210813205735-393438]
	I0813 20:59:15.804742  430322 provision.go:172] copyRemoteCerts
	I0813 20:59:15.804803  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 20:59:15.804839  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:15.810630  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.811009  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:15.811045  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:15.811311  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:15.811513  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:15.811677  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:15.811829  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:15.907977  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 20:59:15.926182  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0813 20:59:15.969182  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 20:59:15.994398  430322 provision.go:86] duration metric: configureAuth took 672.012293ms
	I0813 20:59:15.994426  430322 buildroot.go:189] setting minikube options for container-runtime
	I0813 20:59:15.994619  430322 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:59:15.994636  430322 machine.go:91] provisioned docker machine in 1.017093923s
	I0813 20:59:15.994647  430322 start.go:267] post-start starting for "kubernetes-upgrade-20210813205735-393438" (driver="kvm2")
	I0813 20:59:15.994656  430322 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 20:59:15.994706  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:15.998921  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 20:59:15.998955  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.005687  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.006139  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.006166  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.006764  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.006965  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.007139  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.007310  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:14.775039  429844 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.523480791s)
	I0813 20:59:14.775063  429844 containerd.go:553] Took 10.523589 seconds to extract the tarball
	I0813 20:59:14.775075  429844 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 20:59:14.847701  429844 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:59:15.046932  429844 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:59:15.115651  429844 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 20:59:15.156721  429844 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 20:59:15.176516  429844 docker.go:153] disabling docker service ...
	I0813 20:59:15.176575  429844 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 20:59:15.195373  429844 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 20:59:15.208929  429844 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 20:59:15.423095  429844 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 20:59:15.651161  429844 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 20:59:15.667410  429844 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 20:59:15.686989  429844 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSB0cnVlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
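
Both file writes above sidestep shell quoting: the short crictl.yaml is printed directly, while the multi-line containerd config.toml is shipped base64-encoded and decoded on the guest. A sketch of composing such a command; teeCommand is an illustrative helper and the TOML body is a stand-in for the full config in the log:

package main

import (
	"encoding/base64"
	"fmt"
)

// teeCommand builds a `printf <base64> | base64 -d | sudo tee <path>`
// pipeline like the one logged above, so arbitrary multi-line contents
// survive the remote shell without escaping.
func teeCommand(path, contents string) string {
	enc := base64.StdEncoding.EncodeToString([]byte(contents))
	return fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee %s`, enc, path)
}

func main() {
	toml := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\n"
	fmt.Println(teeCommand("/etc/containerd/config.toml", toml))
}
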
	I0813 20:59:15.707356  429844 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 20:59:15.716851  429844 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 20:59:15.716901  429844 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 20:59:15.737623  429844 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 20:59:15.746974  429844 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 20:59:15.934895  429844 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 20:59:16.002845  429844 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 20:59:16.002896  429844 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:59:16.010688  429844 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
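
The stat-and-retry above is a plain wait-for-file loop: after `systemctl restart containerd` the socket briefly disappears. A local sketch of the same idea, assuming a flat one-second interval where the log shows a randomized ~1.1s backoff (the real code also runs stat over SSH rather than locally):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" loop in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the runtime has recreated its socket
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", time.Minute); err != nil {
		fmt.Println(err)
	}
}
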
	I0813 20:59:17.115625  429844 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 20:59:17.122401  429844 start.go:413] Will wait 60s for crictl version
	I0813 20:59:17.122460  429844 ssh_runner.go:149] Run: sudo crictl version
	I0813 20:59:17.161733  429844 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 20:59:17.161805  429844 ssh_runner.go:149] Run: containerd --version
	I0813 20:59:17.195014  429844 ssh_runner.go:149] Run: containerd --version
	I0813 20:59:17.227710  429844 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 20:59:17.227794  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) Calling .GetIP
	I0813 20:59:17.233294  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:59:17.233714  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fb:29", ip: ""} in network mk-force-systemd-env-20210813205836-393438: {Iface:virbr5 ExpiryTime:2021-08-13 21:58:53 +0000 UTC Type:0 Mac:52:54:00:ec:fb:29 Iaid: IPaddr:192.168.83.204 Prefix:24 Hostname:force-systemd-env-20210813205836-393438 Clientid:01:52:54:00:ec:fb:29}
	I0813 20:59:17.233767  429844 main.go:130] libmachine: (force-systemd-env-20210813205836-393438) DBG | domain force-systemd-env-20210813205836-393438 has defined IP address 192.168.83.204 and MAC address 52:54:00:ec:fb:29 in network mk-force-systemd-env-20210813205836-393438
	I0813 20:59:17.233945  429844 ssh_runner.go:149] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0813 20:59:17.238972  429844 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 20:59:17.252336  429844 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:59:17.252395  429844 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:17.286326  429844 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:59:17.286348  429844 containerd.go:517] Images already preloaded, skipping extraction
	I0813 20:59:17.286402  429844 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:17.318852  429844 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 20:59:17.318878  429844 cache_images.go:74] Images are preloaded, skipping loading
	I0813 20:59:17.318944  429844 ssh_runner.go:149] Run: sudo crictl info
	I0813 20:59:17.350917  429844 cni.go:93] Creating CNI manager for ""
	I0813 20:59:17.350945  429844 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:59:17.350958  429844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 20:59:17.350973  429844 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.204 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-20210813205836-393438 NodeName:force-systemd-env-20210813205836-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 20:59:17.351127  429844 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "force-systemd-env-20210813205836-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
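
The YAML above is rendered from the kubeadm options struct logged just before it. A toy sketch of that rendering step with text/template — not minikube's actual template, which also emits the Init/Kubelet/KubeProxy sections; the three values filled in are taken from the log:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down ClusterConfiguration template for illustration only.
const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
	// Values as logged for the force-systemd-env profile.
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.21.3",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceSubnet":     "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
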
	
	I0813 20:59:17.351246  429844 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=force-systemd-env-20210813205836-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.204 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 20:59:17.351312  429844 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 20:59:17.359902  429844 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 20:59:17.359971  429844 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 20:59:17.366944  429844 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (555 bytes)
	I0813 20:59:17.379424  429844 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 20:59:17.391585  429844 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0813 20:59:17.406745  429844 ssh_runner.go:149] Run: grep 192.168.83.204	control-plane.minikube.internal$ /etc/hosts
	I0813 20:59:17.411069  429844 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
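
The grep-then-rewrite pair above is the upsert idiom for /etc/hosts entries: filter out any existing line for the host, append the fresh "IP<TAB>host" line, write to a temp file, and sudo cp it into place (a plain `>` redirect would not run with root privileges). A sketch of composing that command; hostsCommand is an illustrative helper name:

package main

import "fmt"

// hostsCommand returns a shell command that replaces any existing
// entry for host in /etc/hosts with "ip<TAB>host", via a temp file
// and sudo cp, matching the command logged above.
func hostsCommand(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
}

func main() {
	fmt.Println(hostsCommand("192.168.83.204", "control-plane.minikube.internal"))
}
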
	I0813 20:59:17.422330  429844 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438 for IP: 192.168.83.204
	I0813 20:59:17.422387  429844 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 20:59:17.422411  429844 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 20:59:17.422464  429844 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key
	I0813 20:59:17.422475  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt with IP's: []
	I0813 20:59:17.589717  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt ...
	I0813 20:59:17.589751  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.crt: {Name:mk6b638656acdae073d352761d68fbce2d483a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.589968  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key ...
	I0813 20:59:17.589989  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/client.key: {Name:mk9ac2f2539726279887f5743a365e91b105c985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.590095  429844 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1
	I0813 20:59:17.590108  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 with IP's: [192.168.83.204 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 20:59:17.670870  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 ...
	I0813 20:59:17.670902  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1: {Name:mk74913ff00a65a919a90225ed0787c2cfa299a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.671104  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1 ...
	I0813 20:59:17.671122  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1: {Name:mk4e832137b5b0e86e874baf2ff0ef565095edcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.671224  429844 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt.34cc45b1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt
	I0813 20:59:17.671292  429844 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key.34cc45b1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key
	I0813 20:59:17.671342  429844 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key
	I0813 20:59:17.671350  429844 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt with IP's: []
	I0813 20:59:17.811967  429844 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt ...
	I0813 20:59:17.811999  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt: {Name:mka2285c334ce140bfabe8b380f8ad699fb95705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.812195  429844 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key ...
	I0813 20:59:17.812213  429844 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key: {Name:mkb6682fffa6b297b1d5d051bc98ddc5e21e1737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:17.812326  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 20:59:17.812347  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 20:59:17.812362  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 20:59:17.812374  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 20:59:17.812388  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 20:59:17.812401  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 20:59:17.812413  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 20:59:17.812425  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 20:59:17.812864  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 20:59:17.812941  429844 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 20:59:17.812954  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 20:59:17.812992  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 20:59:17.813034  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 20:59:17.813070  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 20:59:17.813140  429844 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:59:17.813182  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem -> /usr/share/ca-certificates/393438.pem
	I0813 20:59:17.813203  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> /usr/share/ca-certificates/3934382.pem
	I0813 20:59:17.813220  429844 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:17.815185  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 20:59:17.835646  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 20:59:17.854490  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 20:59:17.870584  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/force-systemd-env-20210813205836-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 20:59:17.888194  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 20:59:17.904940  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 20:59:17.923217  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 20:59:17.939714  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 20:59:17.956070  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 20:59:17.973489  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 20:59:17.990533  429844 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 20:59:18.006640  429844 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 20:59:18.018556  429844 ssh_runner.go:149] Run: openssl version
	I0813 20:59:18.026398  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 20:59:18.036506  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.043248  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.043295  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 20:59:18.051624  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 20:59:18.060163  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 20:59:18.068077  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.073070  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.073115  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 20:59:18.079189  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 20:59:18.086616  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 20:59:18.094245  429844 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.098831  429844 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.098874  429844 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 20:59:18.104647  429844 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
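The openssl/ln sequence above is the standard OpenSSL hashed-symlink layout for /etc/ssl/certs: each CA is linked as <subject-hash>.0 so TLS consumers can find it by hash. A minimal local sketch of those same steps, assuming passwordless sudo (minikube runs them over SSH inside the guest):

package certs

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA hashes pem with openssl and links it under /etc/ssl/certs,
// mirroring the idempotent "test -L ... || ln -fs ..." pattern in the log.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}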
	I0813 20:59:18.112378  429844 kubeadm.go:390] StartCluster: {Name:force-systemd-env-20210813205836-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210813205836-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.204 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:59:18.112457  429844 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 20:59:18.112500  429844 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 20:59:18.152531  429844 cri.go:76] found id: ""
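The CRI listing step above asks crictl for the IDs of all kube-system containers; an empty result (found id: "") means no cluster is running on the guest yet. A sketch under the assumption of a generic command runner standing in for minikube's ssh_runner:

package cri

import "strings"

// listKubeSystem returns container IDs, one per line from crictl --quiet.
func listKubeSystem(run func(string) (string, error)) ([]string, error) {
	out, err := run(`sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"`)
	if err != nil {
		return nil, err
	}
	return strings.Fields(out), nil
}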
	I0813 20:59:18.152584  429844 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 20:59:18.160520  429844 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 20:59:18.167754  429844 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 20:59:18.174622  429844 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 20:59:18.174683  429844 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
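The kubeadm init invocation above is assembled from the Kubernetes version, the generated config path, and a fixed list of preflight checks to skip (the directories and manifests minikube manages itself, plus Port-10250, Swap, and Mem). A sketch of that assembly; the helper name is invented, the flag list is taken from the log:

package bootstrap

import (
	"fmt"
	"strings"
)

// kubeadmInit builds the shell command shown in the log: version-pinned
// binaries are put first on PATH, then kubeadm init runs with the skips.
func kubeadmInit(version, config string, skip []string) string {
	return fmt.Sprintf(
		"sudo env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s",
		version, config, strings.Join(skip, ","))
}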
	I0813 20:59:15.870106  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:19.916324  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 20:59:19.916355  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 20:59:20.369815  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:20.377224  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:20.377246  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
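The 403 and 500 responses above are the normal bring-up sequence: the probe is anonymous, so it is forbidden until the rbac/bootstrap-roles post-start hook finishes, and healthz keeps returning 500 while the remaining hooks are pending. A minimal sketch of the polling loop, assuming the roughly 500ms interval visible in the log timestamps:

package apiserver

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 ("ok") or the timeout lapses.
// Certificate verification is skipped because the apiserver cert is not yet
// trusted by this client; the probe only cares about the status code.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}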
	I0813 20:59:16.101618  430322 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 20:59:16.107180  430322 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 20:59:16.107204  430322 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 20:59:16.107264  430322 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 20:59:16.107365  430322 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 20:59:16.107482  430322 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 20:59:16.116337  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 20:59:16.137631  430322 start.go:270] post-start completed in 142.968259ms
	I0813 20:59:16.137673  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.137960  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.143893  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.144357  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.144393  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.144525  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.144729  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.144879  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.145013  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.145162  430322 main.go:130] libmachine: Using SSH client type: native
	I0813 20:59:16.145340  430322 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0813 20:59:16.145356  430322 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 20:59:16.273390  430322 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888356.165572851
	
	I0813 20:59:16.273423  430322 fix.go:212] guest clock: 1628888356.165572851
	I0813 20:59:16.273435  430322 fix.go:225] Guest: 2021-08-13 20:59:16.165572851 +0000 UTC Remote: 2021-08-13 20:59:16.137936029 +0000 UTC m=+15.073653376 (delta=27.636822ms)
	I0813 20:59:16.273466  430322 fix.go:196] guest clock delta is within tolerance: 27.636822ms
	I0813 20:59:16.273484  430322 fix.go:57] fixHost completed within 15.007250096s
	I0813 20:59:16.273495  430322 start.go:80] releasing machines lock for "kubernetes-upgrade-20210813205735-393438", held for 15.007279625s
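The clock check above (the "delta=27.636822ms" line) reads the guest's clock as fractional seconds and compares it with the host's; the guest clock is only reset when the delta exceeds a tolerance. A sketch, with runOnGuest standing in for the SSH runner:

package clock

import (
	"strconv"
	"strings"
	"time"
)

// guestDelta runs `date +%s.%N` on the guest and returns host-minus-guest.
// Parsing into a float64 loses some nanosecond precision, which is fine for
// a tolerance check in the tens of milliseconds.
func guestDelta(runOnGuest func(string) (string, error)) (time.Duration, error) {
	out, err := runOnGuest("date +%s.%N")
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}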
	I0813 20:59:16.273547  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.273864  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetIP
	I0813 20:59:16.280481  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.280887  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.280930  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.281057  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.281265  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.281833  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .DriverName
	I0813 20:59:16.282114  430322 ssh_runner.go:149] Run: systemctl --version
	I0813 20:59:16.282129  430322 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 20:59:16.282147  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.282183  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHHostname
	I0813 20:59:16.289563  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290185  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290473  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.290560  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.290775  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.290971  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.291085  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ef:93", ip: ""} in network mk-kubernetes-upgrade-20210813205735-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:59:13 +0000 UTC Type:0 Mac:52:54:00:50:ef:93 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:kubernetes-upgrade-20210813205735-393438 Clientid:01:52:54:00:50:ef:93}
	I0813 20:59:16.291206  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.291246  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) DBG | domain kubernetes-upgrade-20210813205735-393438 has defined IP address 192.168.39.75 and MAC address 52:54:00:50:ef:93 in network mk-kubernetes-upgrade-20210813205735-393438
	I0813 20:59:16.291408  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:16.291516  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHPort
	I0813 20:59:16.291696  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHKeyPath
	I0813 20:59:16.291840  430322 main.go:130] libmachine: (kubernetes-upgrade-20210813205735-393438) Calling .GetSSHUsername
	I0813 20:59:16.291971  430322 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kubernetes-upgrade-20210813205735-393438/id_rsa Username:docker}
	I0813 20:59:16.377402  430322 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:59:16.377543  430322 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 20:59:20.423651  430322 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.04608471s)
	I0813 20:59:20.423837  430322 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 20:59:20.423909  430322 ssh_runner.go:149] Run: which lz4
	I0813 20:59:20.428467  430322 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 20:59:20.433162  430322 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 20:59:20.433188  430322 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
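The stat/scp pair above is an existence check with a copy fallback: the ~900MB preload tarball is only pushed when it is not already on the guest. A sketch under the assumption of generic run/copy helpers:

package preload

// ensurePreload skips the expensive copy when the tarball already exists;
// stat exits non-zero when the file is missing (status 1 in the log).
func ensurePreload(run func(string) error, copyTo func(src, dst string) error, local string) error {
	if run(`stat -c "%s %y" /preloaded.tar.lz4`) == nil {
		return nil // tarball already present on the guest
	}
	return copyTo(local, "/preloaded.tar.lz4")
}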
	I0813 20:59:18.857423  429844 out.go:204]   - Generating certificates and keys ...
	I0813 20:59:21.576816  429844 out.go:204]   - Booting up control plane ...
	I0813 20:59:20.869652  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.055874  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:21.055917  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:59:21.370125  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.389753  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 20:59:21.389788  429197 api_server.go:101] status: https://192.168.72.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 20:59:21.874753  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:21.890465  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 200:
	ok
	I0813 20:59:21.902570  429197 api_server.go:139] control plane version: v1.20.0
	I0813 20:59:21.902598  429197 api_server.go:129] duration metric: took 6.534669952s to wait for apiserver health ...
	I0813 20:59:21.902612  429197 cni.go:93] Creating CNI manager for ""
	I0813 20:59:21.902621  429197 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:59:21.904775  429197 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 20:59:21.904840  429197 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 20:59:21.918864  429197 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
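The bridge CNI step above writes a conflist into /etc/cni/net.d. The log does not show the file's contents, so the configuration below is a generic bridge + portmap example of the kind of file involved, not minikube's exact output:

package cni

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

// writeConflist mirrors the mkdir + write sequence in the log; it needs to
// run as root on the guest.
func writeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}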
	I0813 20:59:21.941920  429197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:59:21.958653  429197 system_pods.go:59] 7 kube-system pods found
	I0813 20:59:21.960060  429197 system_pods.go:61] "coredns-74ff55c5b-ttqx2" [5fa519a7-e077-42ec-8813-20a70746db5d] Running
	I0813 20:59:21.960079  429197 system_pods.go:61] "etcd-running-upgrade-20210813205520-393438" [fe83f029-cb21-4cad-ac77-a85f49e7fce9] Running
	I0813 20:59:21.960087  429197 system_pods.go:61] "kube-apiserver-running-upgrade-20210813205520-393438" [c1e2f30a-1a50-43df-954b-a45b1270ecea] Running
	I0813 20:59:21.960094  429197 system_pods.go:61] "kube-controller-manager-running-upgrade-20210813205520-393438" [cff9407f-8f50-4fb7-a0ab-94fc0bd172a6] Running
	I0813 20:59:21.960099  429197 system_pods.go:61] "kube-proxy-n27fb" [bc2792f7-9b16-46cb-935f-f338306bfa30] Running
	I0813 20:59:21.960119  429197 system_pods.go:61] "kube-scheduler-running-upgrade-20210813205520-393438" [0a5ba635-fdf2-4704-8220-cdeef517411e] Running
	I0813 20:59:21.960131  429197 system_pods.go:61] "storage-provisioner" [11ab50e9-0e9e-4491-8ebc-67c7a715291f] Running
	I0813 20:59:21.960139  429197 system_pods.go:74] duration metric: took 18.193699ms to wait for pod list to return data ...
	I0813 20:59:21.960171  429197 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:59:21.965172  429197 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:59:21.965378  429197 node_conditions.go:123] node cpu capacity is 2
	I0813 20:59:21.965421  429197 node_conditions.go:105] duration metric: took 5.218443ms to run NodePressure ...
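The two checks above (kube-system pod listing, then node capacity for the NodePressure verification) can be reproduced with client-go. A sketch using the standard import paths; the kubeconfig path is an assumption:

package verify

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// check lists kube-system pods and prints node CPU capacity, the same data
// the system_pods and node_conditions steps inspect in the log.
func check(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
	}
	return nil
}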
	I0813 20:59:21.965447  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 20:59:22.577680  429197 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 20:59:22.603858  429197 ops.go:34] apiserver oom_adj: -16
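The oom_adj probe above finds the kube-apiserver pid and reads /proc/<pid>/oom_adj; -16 tells the kernel OOM killer to strongly prefer other victims. A local sketch of the same two steps:

package oom

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}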
	I0813 20:59:22.603884  429197 kubeadm.go:604] restartCluster took 34.724172272s
	I0813 20:59:22.603907  429197 kubeadm.go:392] StartCluster complete in 34.836238371s
	I0813 20:59:22.603928  429197 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:22.604043  429197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:22.605642  429197 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 20:59:22.606807  429197 kapi.go:59] client config for running-upgrade-20210813205520-393438: &rest.Config{Host:"https://192.168.72.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:59:22.619466  429197 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "running-upgrade-20210813205520-393438" rescaled to 1
	I0813 20:59:22.619522  429197 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.72.177 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
	I0813 20:59:22.621692  429197 out.go:177] * Verifying Kubernetes components...
	I0813 20:59:22.621764  429197 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:59:22.619684  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 20:59:22.619704  429197 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 20:59:22.621911  429197 addons.go:59] Setting storage-provisioner=true in profile "running-upgrade-20210813205520-393438"
	I0813 20:59:22.621929  429197 addons.go:135] Setting addon storage-provisioner=true in "running-upgrade-20210813205520-393438"
	W0813 20:59:22.621936  429197 addons.go:147] addon storage-provisioner should already be in state true
	I0813 20:59:22.621963  429197 host.go:66] Checking if "running-upgrade-20210813205520-393438" exists ...
	I0813 20:59:22.622493  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:22.622530  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:22.619917  429197 config.go:177] Loaded profile config "running-upgrade-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0813 20:59:22.622616  429197 addons.go:59] Setting default-storageclass=true in profile "running-upgrade-20210813205520-393438"
	I0813 20:59:22.622635  429197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-20210813205520-393438"
	I0813 20:59:22.623179  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:22.623223  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:22.637873  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0813 20:59:22.638578  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:22.639220  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:59:22.639239  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:22.639898  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:22.640523  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:22.640570  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:22.652781  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0813 20:59:22.653292  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:22.653863  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:59:22.653887  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:22.654095  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0813 20:59:22.654278  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:22.654479  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetState
	I0813 20:59:22.654529  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:22.655752  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:59:22.655773  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:22.656189  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:22.656355  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetState
	I0813 20:59:22.659996  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:59:22.660189  429197 kapi.go:59] client config for running-upgrade-20210813205520-393438: &rest.Config{Host:"https://192.168.72.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/running-upgrade-20210813205520-393438/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 20:59:22.664553  429197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 20:59:22.664693  429197 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:59:22.664710  429197 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 20:59:22.664731  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:59:22.671091  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:59:22.671647  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:59:22.671670  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:59:22.671848  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:59:22.672043  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:59:22.672281  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:59:22.672449  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:59:22.689765  429197 addons.go:135] Setting addon default-storageclass=true in "running-upgrade-20210813205520-393438"
	W0813 20:59:22.689792  429197 addons.go:147] addon default-storageclass should already be in state true
	I0813 20:59:22.689822  429197 host.go:66] Checking if "running-upgrade-20210813205520-393438" exists ...
	I0813 20:59:22.690282  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:22.690332  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:22.707262  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0813 20:59:22.707809  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:22.708384  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:59:22.708407  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:22.709099  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:22.709938  429197 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:59:22.709984  429197 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:59:22.724751  429197 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0813 20:59:22.725194  429197 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:59:22.725797  429197 main.go:130] libmachine: Using API Version  1
	I0813 20:59:22.725825  429197 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:59:22.726262  429197 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:59:22.726435  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetState
	I0813 20:59:22.730833  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .DriverName
	I0813 20:59:22.734924  429197 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 20:59:22.734941  429197 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 20:59:22.734962  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHHostname
	I0813 20:59:22.742312  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:59:22.742835  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:ee:19", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2021-08-13 21:56:44 +0000 UTC Type:0 Mac:52:54:00:51:ee:19 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:running-upgrade-20210813205520-393438 Clientid:01:52:54:00:51:ee:19}
	I0813 20:59:22.742966  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | domain running-upgrade-20210813205520-393438 has defined IP address 192.168.72.177 and MAC address 52:54:00:51:ee:19 in network minikube-net
	I0813 20:59:22.743309  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHPort
	I0813 20:59:22.743505  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHKeyPath
	I0813 20:59:22.743687  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .GetSSHUsername
	I0813 20:59:22.743903  429197 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/running-upgrade-20210813205520-393438/id_rsa Username:docker}
	I0813 20:59:22.826570  429197 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 20:59:22.886297  429197 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 20:59:23.210554  429197 api_server.go:50] waiting for apiserver process to appear ...
	I0813 20:59:23.210637  429197 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:59:23.211251  429197 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
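The pipeline above rewrites the coredns Corefile in place: fetch the ConfigMap, use sed to insert a hosts{} block that resolves host.minikube.internal to the host-side gateway (192.168.72.1 here) just before the forward directive, then replace the ConfigMap. A sketch that rebuilds the same command string from the log; binDir and hostIP are parameters:

package dns

import "fmt"

// patchCoreDNSCmd returns the get | sed | replace pipeline shown above.
func patchCoreDNSCmd(binDir, hostIP string) string {
	kubectl := binDir + "/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	sed := fmt.Sprintf(`sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }'`, hostIP)
	return fmt.Sprintf("sudo %s -n kube-system get configmap coredns -o yaml | %s | sudo %s replace -f -", kubectl, sed, kubectl)
}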
	I0813 20:59:24.374857  429197 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.548247452s)
	I0813 20:59:24.374906  429197 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:24.374918  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .Close
	I0813 20:59:24.374928  429197 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.488599374s)
	I0813 20:59:24.374961  429197 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:24.374979  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .Close
	I0813 20:59:24.375017  429197 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.164366671s)
	I0813 20:59:24.375031  429197 api_server.go:70] duration metric: took 1.75547246s to wait for apiserver process to appear ...
	I0813 20:59:24.375044  429197 api_server.go:86] waiting for apiserver healthz status ...
	I0813 20:59:24.375054  429197 api_server.go:239] Checking apiserver healthz at https://192.168.72.177:8443/healthz ...
	I0813 20:59:24.375210  429197 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:24.375232  429197 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:24.375243  429197 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:24.375252  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .Close
	I0813 20:59:24.375406  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:59:24.375422  429197 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:24.375434  429197 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:24.375442  429197 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:24.375451  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .Close
	I0813 20:59:24.375526  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:59:24.375561  429197 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:24.375568  429197 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:24.375724  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) DBG | Closing plugin on server side
	I0813 20:59:24.375812  429197 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:24.375851  429197 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:24.375869  429197 main.go:130] libmachine: Making call to close driver server
	I0813 20:59:24.375891  429197 main.go:130] libmachine: (running-upgrade-20210813205520-393438) Calling .Close
	I0813 20:59:24.376089  429197 main.go:130] libmachine: Successfully made call to close driver server
	I0813 20:59:24.376105  429197 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 20:59:24.378279  429197 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 20:59:24.378301  429197 addons.go:344] enableAddons completed in 1.758604286s
	I0813 20:59:24.392763  429197 api_server.go:265] https://192.168.72.177:8443/healthz returned 200:
	ok
	I0813 20:59:24.393708  429197 api_server.go:139] control plane version: v1.20.0
	I0813 20:59:24.393728  429197 api_server.go:129] duration metric: took 18.6779ms to wait for apiserver health ...
	I0813 20:59:24.393738  429197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 20:59:24.404645  429197 system_pods.go:59] 7 kube-system pods found
	I0813 20:59:24.404673  429197 system_pods.go:61] "coredns-74ff55c5b-ttqx2" [5fa519a7-e077-42ec-8813-20a70746db5d] Running
	I0813 20:59:24.404681  429197 system_pods.go:61] "etcd-running-upgrade-20210813205520-393438" [fe83f029-cb21-4cad-ac77-a85f49e7fce9] Running
	I0813 20:59:24.404687  429197 system_pods.go:61] "kube-apiserver-running-upgrade-20210813205520-393438" [c1e2f30a-1a50-43df-954b-a45b1270ecea] Running
	I0813 20:59:24.404700  429197 system_pods.go:61] "kube-controller-manager-running-upgrade-20210813205520-393438" [cff9407f-8f50-4fb7-a0ab-94fc0bd172a6] Running
	I0813 20:59:24.404705  429197 system_pods.go:61] "kube-proxy-n27fb" [bc2792f7-9b16-46cb-935f-f338306bfa30] Running
	I0813 20:59:24.404711  429197 system_pods.go:61] "kube-scheduler-running-upgrade-20210813205520-393438" [0a5ba635-fdf2-4704-8220-cdeef517411e] Running
	I0813 20:59:24.404716  429197 system_pods.go:61] "storage-provisioner" [11ab50e9-0e9e-4491-8ebc-67c7a715291f] Running
	I0813 20:59:24.404723  429197 system_pods.go:74] duration metric: took 10.979983ms to wait for pod list to return data ...
	I0813 20:59:24.404737  429197 kubeadm.go:547] duration metric: took 1.785179858s to wait for : map[apiserver:true system_pods:true] ...
	I0813 20:59:24.404761  429197 node_conditions.go:102] verifying NodePressure condition ...
	I0813 20:59:24.411084  429197 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 20:59:24.411108  429197 node_conditions.go:123] node cpu capacity is 2
	I0813 20:59:24.411121  429197 node_conditions.go:105] duration metric: took 6.354065ms to run NodePressure ...
	I0813 20:59:24.411132  429197 start.go:231] waiting for startup goroutines ...
	I0813 20:59:24.488430  429197 start.go:462] kubectl: 1.20.5, cluster: 1.20.0 (minor skew: 0)
	I0813 20:59:24.490754  429197 out.go:177] * Done! kubectl is now configured to use "running-upgrade-20210813205520-393438" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	33fae69af6bcf       6e38f40d628db       59 seconds ago       Exited              storage-provisioner       0                   76aee79f917be
	b6372d9d76486       296a6d5035e2d       About a minute ago   Running             coredns                   1                   cfc4c8785e479
	afabb5f130410       0369cf4303ffd       About a minute ago   Running             etcd                      1                   3f41ec729ef71
	57f3f32f280d8       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   1                   ce1823a3db17a
	1053b5b4ba3ab       3d174f00aa39e       About a minute ago   Running             kube-apiserver            1                   a655f217cf1c5
	0d1a942c8b8c2       adb2816ea823a       About a minute ago   Running             kube-proxy                1                   47e050012dbca
	1d84b053549cf       6be0dc1302e30       About a minute ago   Running             kube-scheduler            1                   53f314c6cf963
	1bba0d6deb033       adb2816ea823a       2 minutes ago        Exited              kube-proxy                0                   3f6f239c2851f
	63c0cc1fc4c0c       296a6d5035e2d       2 minutes ago        Exited              coredns                   0                   b1f1f31f28005
	698bbea7ce6e9       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   5a66336a35add
	df02c38abac90       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   4cf745987f602
	68bad43283064       bc2bb319a7038       2 minutes ago        Exited              kube-controller-manager   0                   5340b4aa5ca39
	11c2753c9a8a7       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   304b611d719ea
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:59:26 UTC. --
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.078142311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-pause-20210813205520-393438,Uid:86a000e5c08d32d80b2fd4e89cd34dd1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.145266794Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.321521915Z" level=info msg="StartContainer for \"1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c\" returns successfully"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.349622186Z" level=info msg="CreateContainer within sandbox \"3f41ec729ef71933ec60f8fb632875419302e95acc029a319006f332461cf7cf\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.353268082Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.376810925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-jzmnb,Uid:ea00ae4c-f4d9-414c-8762-6314a96c8a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.451595226Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.633919582Z" level=info msg="CreateContainer within sandbox \"cfc4c8785e479503ceb2167b74d4f0b99b2deda88e0bc8dd63814c860e14a682\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.635324605Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\""
	Aug 13 20:58:11 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:11.770314446Z" level=info msg="StartContainer for \"57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.016041628Z" level=info msg="StartContainer for \"afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5\" returns successfully"
	Aug 13 20:58:12 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:12.229109322Z" level=info msg="StartContainer for \"b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d\" returns successfully"
	Aug 13 20:58:15 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:15.472167045Z" level=info msg="StartContainer for \"0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5\" returns successfully"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.856093567Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,}"
	Aug 13 20:58:25 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:25.901091488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a pid=4886
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.481756294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:99920d7c-bb8d-4c65-bf44-b56f23a40e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.492027606Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.607213854Z" level=info msg="CreateContainer within sandbox \"76aee79f917be71c3e014a4ade7ae7ab1f11efd5f870006e1575f690175c605a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.614295374Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:26 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:26.876068804Z" level=info msg="StartContainer for \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\" returns successfully"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.156236073Z" level=info msg="Finish piping stderr of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.158102368Z" level=info msg="Finish piping stdout of container \"33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81\""
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.159567062Z" level=info msg="TaskExit event &TaskExit{ContainerID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,ID:33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81,Pid:4945,ExitStatus:255,ExitedAt:2021-08-13 20:58:41.157732657 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217770540Z" level=info msg="shim disconnected" id=33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81
	Aug 13 20:58:41 pause-20210813205520-393438 containerd[3753]: time="2021-08-13T20:58:41.217941244Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
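
	The TaskExit above records the storage-provisioner container leaving with status 255 about fifteen seconds after it started, which matches the goroutine dump captured in its log section further down. Once the pod has restarted the container, the crashed instance's final output can be pulled back with --previous (a minimal sketch against this profile's context):

	  kubectl --context pause-20210813205520-393438 -n kube-system logs storage-provisioner --previous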
	
	* 
	* ==> coredns [63c0cc1fc4c0cb78fac8fe29e80eed8b43fa6762ce189d85564911aed6114ba0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0813 20:57:49.980199       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.978) (total time: 30001ms):
	Trace[2019727887]: [30.001847928s] [30.001847928s] END
	E0813 20:57:49.980279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.980655       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[939984059]: [30.00501838s] [30.00501838s] END
	E0813 20:57:49.980691       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0813 20:57:49.981307       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (13-Aug-2021 20:57:19.975) (total time: 30005ms):
	Trace[911902081]: [30.005916603s] [30.005916603s] END
	E0813 20:57:49.981521       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
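
	All three reflector failures in this earlier CoreDNS instance are 30s list timeouts against 10.96.0.1:443, the ClusterIP of the default kubernetes Service, so CoreDNS lost the path to the apiserver while the control plane was cycling rather than failing on its own. A minimal sketch of reproducing the check from inside the cluster, using the public curlimages/curl image:

	  kubectl --context pause-20210813205520-393438 run vip-check --rm -it --restart=Never --image=curlimages/curl -- curl -sk https://10.96.0.1/healthz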
	
	* 
	* ==> coredns [b6372d9d7648658f7077421ada6d80fd2a27141edbbd6ee51d78346ef736205d] <==
	* E0813 20:58:20.310855       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6b95276539722f40f4545af91578505c
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
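
	The restarted CoreDNS hits a different failure mode: the apiserver is reachable but briefly denies the coredns service account, an error that normally clears once the apiserver's RBAC informers finish syncing after a restart. Whether the permission is actually in place can be checked directly (a sketch):

	  kubectl --context pause-20210813205520-393438 auth can-i list services --as=system:serviceaccount:kube-system:coredns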
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210813205520-393438
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210813205520-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=pause-20210813205520-393438
	                    minikube.k8s.io/updated_at=2021_08_13T20_57_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 20:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210813205520-393438
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 20:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 20:57:09 +0000   Fri, 13 Aug 2021 20:57:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.151
	  Hostname:    pause-20210813205520-393438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 77eb9b5f6d424569bb9c035580bd499b
	  System UUID:                77eb9b5f-6d42-4569-bb9c-035580bd499b
	  Boot ID:                    f7c4e7cb-b855-4691-ba67-6445018f8c6d
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-jzmnb                               100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m10s
	  kube-system                 etcd-pause-20210813205520-393438                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m18s
	  kube-system                 kube-apiserver-pause-20210813205520-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-pause-20210813205520-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-mlf5c                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-pause-20210813205520-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m44s (x8 over 2m45s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x8 over 2m45s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x7 over 2m45s)  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m18s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s                  kubelet     Node pause-20210813205520-393438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m17s                  kubelet     Node pause-20210813205520-393438 status is now: NodeReady
	  Normal  Starting                 2m6s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 63s                    kube-proxy  Starting kube-proxy.
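
	Two details in the node description are worth flagging: the Events list shows kube-proxy starting twice (2m6s and 63s ago), bracketing the pause/unpause cycle, and no NodeNotReady event appears in between. The same events can be listed without the rest of the describe output (a sketch; node events land in the default namespace):

	  kubectl --context pause-20210813205520-393438 get events --field-selector involvedObject.name=pause-20210813205520-393438 --sort-by=.lastTimestamp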
	
	* 
	* ==> dmesg <==
	* [  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.863604] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.032050] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.917916] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +2.669268] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.335717] vboxguest: loading out-of-tree module taints kernel.
	[  +0.008488] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 20:56] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.927578] systemd-fstab-generator[2132]: Ignoring "noauto" for root device
	[  +0.140064] systemd-fstab-generator[2146]: Ignoring "noauto" for root device
	[  +0.195734] systemd-fstab-generator[2179]: Ignoring "noauto" for root device
	[  +8.321149] systemd-fstab-generator[2386]: Ignoring "noauto" for root device
	[Aug13 20:57] systemd-fstab-generator[2823]: Ignoring "noauto" for root device
	[ +16.072552] kauditd_printk_skb: 38 callbacks suppressed
	[ +34.372009] kauditd_printk_skb: 116 callbacks suppressed
	[  +3.958113] NFSD: Unable to end grace period: -110
	[Aug13 20:58] systemd-fstab-generator[3706]: Ignoring "noauto" for root device
	[  +0.206181] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +0.261980] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +19.584639] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.482860] systemd-fstab-generator[4981]: Ignoring "noauto" for root device
	[  +0.846439] systemd-fstab-generator[5035]: Ignoring "noauto" for root device
	[Aug13 20:59] systemd-fstab-generator[5622]: Ignoring "noauto" for root device
	[  +0.795991] systemd-fstab-generator[5652]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [afabb5f13041070a9ab9a114ca1f565264d2fc673c1daf47d64669af0aa8c3e5] <==
	* 2021-08-13 20:58:16.461857 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (198.960862ms) to execute
	2021-08-13 20:58:16.462013 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 " with result "range_response_count:0 size:5" took too long (199.025411ms) to execute
	2021-08-13 20:58:16.462116 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (190.42222ms) to execute
	2021-08-13 20:58:16.462337 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (179.184455ms) to execute
	2021-08-13 20:58:16.462702 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (172.711746ms) to execute
	2021-08-13 20:58:16.462925 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (170.528555ms) to execute
	2021-08-13 20:58:16.463221 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.293847ms) to execute
	2021-08-13 20:58:16.463747 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (158.490371ms) to execute
	2021-08-13 20:58:16.464124 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (152.464331ms) to execute
	2021-08-13 20:58:16.477058 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (151.343452ms) to execute
	2021-08-13 20:58:16.478005 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (142.028022ms) to execute
	2021-08-13 20:58:16.478939 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (142.259692ms) to execute
	2021-08-13 20:58:16.479721 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (129.328346ms) to execute
	2021-08-13 20:58:16.479967 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (126.882803ms) to execute
	2021-08-13 20:58:16.480303 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:11 size:5977" took too long (116.866258ms) to execute
	2021-08-13 20:58:16.480852 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (116.970061ms) to execute
	2021-08-13 20:58:23.354247 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/cluster-admin\" " with result "range_response_count:1 size:718" took too long (1.914180768s) to execute
	2021-08-13 20:58:23.356685 W | etcdserver: request "header:<ID:14244176716868856811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-20210813205520-393438.169af9452389bd61\" value_size:717 lease:5020804680014080881 >> failure:<>>" with result "size:16" took too long (1.23562281s) to execute
	2021-08-13 20:58:23.370142 W | wal: sync duration of 1.250273887s, expected less than 1s
	2021-08-13 20:58:23.370676 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.152835664s) to execute
	2021-08-13 20:58:23.371565 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.728436243s) to execute
	2021-08-13 20:58:23.371769 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.847351028s) to execute
	2021-08-13 20:58:23.378962 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-jzmnb\" " with result "range_response_count:1 size:4862" took too long (671.753147ms) to execute
	2021-08-13 20:58:24.705568 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:1 size:4394" took too long (221.501911ms) to execute
	2021-08-13 20:58:26.341296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
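
	The burst of "took too long" warnings, and especially the 1.25s WAL sync, points at slow fsync on the backing disk rather than at any single request: every write in that window queues behind the WAL flush. etcd exposes this directly as the etcd_disk_wal_fsync_duration_seconds histogram; a sketch of reading it from the node, assuming the kubeadm default --listen-metrics-urls=http://127.0.0.1:2381 and curl being available in the minikube VM:

	  out/minikube-linux-amd64 ssh -p pause-20210813205520-393438 -- curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration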
	
	* 
	* ==> etcd [df02c38abac90e1bfb1eaa8433ba9faac330d654e786d0c41901507b55d0c418] <==
	* 2021-08-13 20:56:51.867973 I | embed: serving client requests on 192.168.61.151:2379
	2021-08-13 20:56:51.875825 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 20:57:01.271062 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" " with result "range_response_count:0 size:5" took too long (480.2351ms) to execute
	2021-08-13 20:57:01.272131 W | etcdserver: read-only range request "key:\"/registry/minions/pause-20210813205520-393438\" " with result "range_response_count:1 size:3758" took too long (875.676682ms) to execute
	2021-08-13 20:57:01.273551 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f12dc\" " with result "range_response_count:1 size:735" took too long (792.283833ms) to execute
	2021-08-13 20:57:02.171621 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (872.818648ms) to execute
	2021-08-13 20:57:02.172160 W | etcdserver: request "header:<ID:14244176716848216677 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:222 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3993 >> failure:<request_range:<key:\"/registry/minions/pause-20210813205520-393438\" > >>" with result "size:16" took too long (128.660032ms) to execute
	2021-08-13 20:57:02.172330 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:351" took too long (871.615956ms) to execute
	2021-08-13 20:57:02.172733 W | etcdserver: read-only range request "key:\"/registry/events/default/pause-20210813205520-393438.169af930771f2f58\" " with result "range_response_count:1 size:733" took too long (859.92991ms) to execute
	2021-08-13 20:57:02.172849 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (853.236151ms) to execute
	2021-08-13 20:57:09.290631 W | etcdserver: request "header:<ID:14244176716848216792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/pause-20210813205520-393438\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-20210813205520-393438\" value_size:3277 >> failure:<>>" with result "size:5" took too long (472.704737ms) to execute
	2021-08-13 20:57:09.291659 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/pause-20210813205520-393438\" " with result "range_response_count:0 size:5" took too long (897.879132ms) to execute
	2021-08-13 20:57:09.298807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210813205520-393438\" " with result "range_response_count:1 size:4986" took too long (528.421007ms) to execute
	2021-08-13 20:57:09.299124 W | etcdserver: read-only range request "key:\"/registry/csinodes/pause-20210813205520-393438\" " with result "range_response_count:1 size:656" took too long (894.254864ms) to execute
	2021-08-13 20:57:13.314052 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (127.466898ms) to execute
	2021-08-13 20:57:13.314663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (132.387511ms) to execute
	2021-08-13 20:57:16.343764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:20.988739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:30.989151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:39.442816 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (120.094417ms) to execute
	2021-08-13 20:57:40.988900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:57:50.989064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 20:58:00.244154 W | etcdserver: request "header:<ID:14244176716848217456 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:483 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:69 lease:5020804679993441646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>" with result "size:16" took too long (162.220853ms) to execute
	2021-08-13 20:58:00.245134 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (881.389444ms) to execute
	2021-08-13 20:58:00.989778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  20:59:26 up 3 min,  0 users,  load average: 0.98, 0.82, 0.35
	Linux pause-20210813205520-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [1053b5b4ba3abe2d9116c62fc9d2a19e3df3b96688b25fed71f028b7fceffc2c] <==
	* Trace[553017594]: ---"About to write a response" 1919ms (20:58:00.358)
	Trace[553017594]: [1.920866407s] [1.920866407s] END
	I0813 20:58:23.381663       1 trace.go:205] Trace[1143050190]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jzmnb,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:58:22.699) (total time: 682ms):
	Trace[1143050190]: ---"About to write a response" 681ms (20:58:00.380)
	Trace[1143050190]: [682.310081ms] [682.310081ms] END
	I0813 20:58:25.230359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 20:58:25.281700       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 20:58:25.373725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 20:58:25.413105       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 20:58:25.560667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0813 20:59:15.369992       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:59:15.370169       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:59:15.370213       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 20:59:15.548831       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0813 20:59:15.551721       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:15.551910       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:15.553384       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:59:16.253189       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0813 20:59:16.253342       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:16.254420       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:16.258368       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0813 20:59:16.261299       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0813 20:59:16.262291       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0813 20:59:16.288769       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0813 20:59:16.754327       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
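
	The cluster of "Handler timeout" / "context canceled" errors at 20:59:15-16 lines up with the kubelet restart visible in the kubelet section below: in-flight requests were torn down mid-write, so the apiserver could neither authenticate nor answer them. Whether the apiserver itself is healthy right after such a burst can be probed cheaply (a sketch):

	  kubectl --context pause-20210813205520-393438 get --raw '/readyz?verbose'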
	
	* 
	* ==> kube-apiserver [11c2753c9a8a79ebfb2fe156a698be51aed9e9d6ac5dfc0af27d0a4822c7d016] <==
	* I0813 20:57:09.309542       1 trace.go:205] Trace[2046907584]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.501) (total time: 806ms):
	Trace[2046907584]: [806.482297ms] [806.482297ms] END
	I0813 20:57:09.310802       1 trace.go:205] Trace[146959614]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.771) (total time: 538ms):
	Trace[146959614]: ---"Object stored in database" 538ms (20:57:00.310)
	Trace[146959614]: [538.954794ms] [538.954794ms] END
	I0813 20:57:09.311138       1 trace.go:205] Trace[1128950750]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20210813205520-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1128950750]: ---"About to write a response" 537ms (20:57:00.307)
	Trace[1128950750]: [541.267103ms] [541.267103ms] END
	I0813 20:57:09.311248       1 trace.go:205] Trace[1268223707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 541ms):
	Trace[1268223707]: ---"Object stored in database" 540ms (20:57:00.310)
	Trace[1268223707]: [541.971563ms] [541.971563ms] END
	I0813 20:57:09.311433       1 trace.go:205] Trace[1977445463]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.772) (total time: 538ms):
	Trace[1977445463]: ---"Object stored in database" 537ms (20:57:00.310)
	Trace[1977445463]: [538.348208ms] [538.348208ms] END
	I0813 20:57:09.321803       1 trace.go:205] Trace[494614999]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.61.151,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 20:57:08.769) (total time: 552ms):
	Trace[494614999]: [552.453895ms] [552.453895ms] END
	I0813 20:57:09.345220       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 20:57:16.259955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 20:57:16.380865       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 20:57:37.272234       1 client.go:360] parsed scheme: "passthrough"
	I0813 20:57:37.272418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 20:57:37.272507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 20:58:00.246413       1 trace.go:205] Trace[1997979141]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (13-Aug-2021 20:57:59.258) (total time: 987ms):
	Trace[1997979141]: ---"Transaction committed" 984ms (20:58:00.246)
	Trace[1997979141]: [987.521712ms] [987.521712ms] END
	
	* 
	* ==> kube-controller-manager [57f3f32f280d8a4cf60a8d8a37811ee7e7b9d9a126e4b37ae17516cb3b3a7849] <==
	* I0813 20:59:16.709660       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 20:59:16.712193       1 shared_informer.go:247] Caches are synced for GC 
	I0813 20:59:16.722572       1 shared_informer.go:247] Caches are synced for node 
	I0813 20:59:16.722685       1 range_allocator.go:172] Starting range CIDR allocator
	I0813 20:59:16.722692       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0813 20:59:16.722698       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0813 20:59:16.728359       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 20:59:16.729318       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:59:16.729562       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	I0813 20:59:16.729905       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:59:16.731175       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	W0813 20:59:16.731760       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:59:16.732194       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:59:16.732423       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:59:16.732906       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 20:59:16.747287       1 shared_informer.go:247] Caches are synced for TTL 
	I0813 20:59:16.761010       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:59:16.769847       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:59:16.772854       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:59:16.793140       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0813 20:59:16.811782       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:59:16.811797       1 disruption.go:371] Sending events to api server.
	I0813 20:59:17.205296       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:59:17.277302       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:59:17.277855       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [68bad432830642a2624a04015efd233270944ea918f0f82217367834481cc3a8] <==
	* I0813 20:57:15.593972       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 20:57:15.593991       1 disruption.go:371] Sending events to api server.
	I0813 20:57:15.596695       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 20:57:15.636700       1 shared_informer.go:247] Caches are synced for service account 
	I0813 20:57:15.652896       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 20:57:15.701400       1 shared_informer.go:247] Caches are synced for taint 
	I0813 20:57:15.701628       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0813 20:57:15.701702       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210813205520-393438. Assuming now as a timestamp.
	I0813 20:57:15.701748       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0813 20:57:15.701825       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0813 20:57:15.702024       1 event.go:291] "Event occurred" object="pause-20210813205520-393438" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210813205520-393438 event: Registered Node pause-20210813205520-393438 in Controller"
	I0813 20:57:15.735577       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 20:57:15.751667       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 20:57:15.767285       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:15.796364       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0813 20:57:15.847876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 20:57:16.199991       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.200121       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 20:57:16.224599       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 20:57:16.277997       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 20:57:16.457337       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mlf5c"
	I0813 20:57:16.545672       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-fhxw7"
	I0813 20:57:16.596799       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-jzmnb"
	I0813 20:57:16.804186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 20:57:16.819742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-fhxw7"
	
	* 
	* ==> kube-proxy [0d1a942c8b8c2548b54ccff6ad310e0bd108d6f335c4e7af29db42dea2d714c5] <==
	* E0813 20:58:20.334846       1 node.go:161] Failed to retrieve node info: nodes "pause-20210813205520-393438" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0813 20:58:21.364522       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:58:21.365223       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:58:21.366125       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:58:23.461362       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:58:23.462248       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:58:23.465333       1 server_others.go:212] Using iptables Proxier.
	I0813 20:58:23.483125       1 server.go:643] Version: v1.21.3
	I0813 20:58:23.488959       1 config.go:315] Starting service config controller
	I0813 20:58:23.490323       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:58:23.490593       1 config.go:224] Starting endpoint slice config controller
	I0813 20:58:23.490606       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:58:23.512424       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:58:23.514744       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:58:23.591163       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:58:23.593313       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1bba0d6deb03392a9c2a729aa9c03a18c3e1586cd458a1f081392f4b04d0ae62] <==
	* I0813 20:57:20.123665       1 node.go:172] Successfully retrieved node IP: 192.168.61.151
	I0813 20:57:20.123841       1 server_others.go:140] Detected node IP 192.168.61.151
	W0813 20:57:20.123909       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 20:57:20.180054       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 20:57:20.180158       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 20:57:20.180173       1 server_others.go:212] Using iptables Proxier.
	I0813 20:57:20.181825       1 server.go:643] Version: v1.21.3
	I0813 20:57:20.184367       1 config.go:315] Starting service config controller
	I0813 20:57:20.184561       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 20:57:20.184600       1 config.go:224] Starting endpoint slice config controller
	I0813 20:57:20.184604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 20:57:20.203222       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 20:57:20.207174       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 20:57:20.285130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 20:57:20.285144       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d84b053549cf5e14f9013790cc45e59901f21453bab775d7ab0f7fdccc7958c] <==
	* I0813 20:58:11.830530       1 serving.go:347] Generated self-signed cert in-memory
	W0813 20:58:20.220887       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 20:58:20.224373       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 20:58:20.224624       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 20:58:20.224640       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 20:58:20.341243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 20:58:20.343223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.343608       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 20:58:20.347257       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 20:58:20.444874       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0813 20:59:05.413646       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831376       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831398       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831633       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831662       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831677       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831719       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831730       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831767       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831776       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831804       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicaSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831815       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831834       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0813 20:59:12.831859       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kube-scheduler [698bbea7ce6e9ce2ff33d763621c6d0ae027c7205d816ea431cafc6e045b6889] <==
	* I0813 20:56:57.340096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 20:56:57.373873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:57.375600       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:57.398047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:57.406392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 20:56:57.418940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.424521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:57.426539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 20:56:57.426578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 20:56:57.428616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 20:56:57.428811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:57.428854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:57.428897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 20:56:58.261670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 20:56:58.311937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 20:56:58.405804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.463800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 20:56:58.585826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.615525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 20:56:58.626736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 20:56:58.669986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 20:56:58.791820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 20:57:01.440271       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 20:55:52 UTC, end at Fri 2021-08-13 20:59:27 UTC. --
	Aug 13 20:59:16 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:16.442997    5630 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.121813    5630 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.123739    5630 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124150    5630 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124413    5630 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124800    5630 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.124979    5630 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125347    5630 remote_runtime.go:62] parsed scheme: ""
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125615    5630 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.125862    5630 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126021    5630 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126351    5630 remote_image.go:50] parsed scheme: ""
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126598    5630 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126791    5630 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.126948    5630 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127298    5630 kubelet.go:404] "Attempting to sync node with API server"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127591    5630 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.127844    5630 kubelet.go:283] "Adding apiserver pod source"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.128310    5630 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.153644    5630 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.9" apiVersion="v1alpha2"
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: E0813 20:59:21.470849    5630 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 13 20:59:21 pause-20210813205520-393438 kubelet[5630]: I0813 20:59:21.489062    5630 server.go:1190] "Started kubelet"
	Aug 13 20:59:21 pause-20210813205520-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 20:59:21 pause-20210813205520-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
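A note on the remote_runtime/remote_image lines above: kubelet dials the containerd CRI socket as a gRPC passthrough target, which is why the log shows an empty scheme falling back to the default resolver and the balancer switching to "pick_first". The following is a minimal, hypothetical client sketch of that dial, not kubelet's source; only the socket path is taken from the log.

	package main
	
	import (
		"context"
		"net"
		"time"
	
		"google.golang.org/grpc"
	)
	
	func main() {
		// Dial the CRI socket over a unix connection; the address string is
		// passed through unresolved, matching the passthrough.go lines above.
		dialer := func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		conn, err := grpc.DialContext(ctx, "/run/containerd/containerd.sock",
			grpc.WithInsecure(), grpc.WithContextDialer(dialer))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// conn now targets the runtime/image services that kubelet verified
		// above ("Container runtime initialized" containerRuntime="containerd").
	}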
	
	* 
	* ==> storage-provisioner [33fae69af6bcf7f1d0806b0e88905d0299d36a6e1bdbe6a0842fdfddca291e81] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000328290, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000328280)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003f0480, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bcc80, 0x18e5530, 0xc0003284c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004ceee0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004ceee0, 0x18b3d60, 0xc000311f80, 0x1, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ceee0, 0x3b9aca00, 0x0, 0x17a0501, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0004ceee0, 0x3b9aca00, 0xc00008ad80)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
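The goroutine dump in the storage-provisioner excerpt above is not a crash: goroutine 90 is parked in sync.Cond.Wait inside workqueue.(*Type).Get, which is where an idle worker blocks until the next item arrives. Below is a minimal sketch of that worker pattern, assuming only the client-go workqueue and apimachinery wait packages named in the trace; it is not the provisioner's actual code.

	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)
	
	func main() {
		queue := workqueue.New()
		stopCh := make(chan struct{})
	
		// wait.Until re-invokes the worker until stopCh closes, mirroring the
		// BackoffUntil/JitterUntil frames in the trace above.
		go wait.Until(func() {
			for {
				item, shutdown := queue.Get() // parks in sync.Cond.Wait when empty
				if shutdown {
					return
				}
				fmt.Println("processing", item) // provision/delete would go here
				queue.Done(item)
			}
		}, time.Second, stopCh)
	
		queue.Add("pvc/default/hpvc") // hypothetical work item
		time.Sleep(200 * time.Millisecond)
		queue.ShutDown() // unblocks Get with shutdown=true
		close(stopCh)
	}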
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813205520-393438 -n pause-20210813205520-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210813205520-393438 -n pause-20210813205520-393438: exit status 2 (328.045731ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210813205520-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210813205520-393438 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210813205520-393438 describe pod : exit status 1 (58.284642ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210813205520-393438 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (11.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210813210044-393438 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210813210044-393438 --alsologtostderr -v=1: exit status 80 (2.613290126s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210813210044-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:11:53.128210  435085 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:11:53.136973  435085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:53.136991  435085 out.go:311] Setting ErrFile to fd 2...
	I0813 21:11:53.136997  435085 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:11:53.137150  435085 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:11:53.137370  435085 out.go:305] Setting JSON to false
	I0813 21:11:53.137393  435085 mustload.go:65] Loading cluster: no-preload-20210813210044-393438
	I0813 21:11:53.137817  435085 config.go:177] Loaded profile config "no-preload-20210813210044-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:53.138360  435085 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:53.138418  435085 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:53.150106  435085 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0813 21:11:53.150500  435085 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:53.151085  435085 main.go:130] libmachine: Using API Version  1
	I0813 21:11:53.151114  435085 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:53.151524  435085 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:53.151708  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:53.154999  435085 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:53.155403  435085 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:53.155453  435085 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:53.165822  435085 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33109
	I0813 21:11:53.166179  435085 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:53.166619  435085 main.go:130] libmachine: Using API Version  1
	I0813 21:11:53.166640  435085 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:53.166965  435085 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:53.167161  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:53.167837  435085 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210813210044-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:11:53.170128  435085 out.go:177] * Pausing node no-preload-20210813210044-393438 ... 
	I0813 21:11:53.170161  435085 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:53.170563  435085 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:53.170607  435085 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:53.181895  435085 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0813 21:11:53.182320  435085 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:53.182807  435085 main.go:130] libmachine: Using API Version  1
	I0813 21:11:53.182835  435085 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:53.183215  435085 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:53.183404  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:53.183622  435085 ssh_runner.go:149] Run: systemctl --version
	I0813 21:11:53.183648  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:53.190071  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:53.190486  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:53.190516  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:53.190717  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:53.190910  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:53.191077  435085 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:53.191222  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:53.303066  435085 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:53.315872  435085 pause.go:50] kubelet running: true
	I0813 21:11:53.315937  435085 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:11:53.603830  435085 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:11:53.603911  435085 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:11:53.740722  435085 cri.go:76] found id: "d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922"
	I0813 21:11:53.740754  435085 cri.go:76] found id: "d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1"
	I0813 21:11:53.740761  435085 cri.go:76] found id: "9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313"
	I0813 21:11:53.740785  435085 cri.go:76] found id: "cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075"
	I0813 21:11:53.740791  435085 cri.go:76] found id: "aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e"
	I0813 21:11:53.740797  435085 cri.go:76] found id: "0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f"
	I0813 21:11:53.740802  435085 cri.go:76] found id: "d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70"
	I0813 21:11:53.740808  435085 cri.go:76] found id: "c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	I0813 21:11:53.740813  435085 cri.go:76] found id: "7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83"
	I0813 21:11:53.740823  435085 cri.go:76] found id: ""
	I0813 21:11:53.740916  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:11:53.802482  435085 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","pid":4782,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f/rootfs","created":"2021-08-13T21:11:04.263792672Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","pid":5719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a/rootfs","created":"2021-08-13T21:11:32.221001949Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-7z8h9_5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","pid":4673,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34/rootfs","created":"2021-08-13T21:11:03.334361917Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad
73d5ad72ab641e2d34","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813210044-393438_e4e1034d86c85528fc2683beffec2e7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce/rootfs","created":"2021-08-13T21:11:28.741845914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-r4dmk_0549f087-6804-403a-91ac-46ea3176692a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","pid"
:6078,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83/rootfs","created":"2021-08-13T21:11:35.319475018Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","pid":5990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5/rootfs","created":"2021-08-13T21:11:33.863917847Z","annotations":{"io.k
ubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-29b2r_42ed3d11-7b24-4788-8823-852e5b2ca9ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","pid":4667,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8/rootfs","created":"2021-08-13T21:11:03.305162529Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813210044-393438_3b6ae
bf51f06d22f1277fd0d24410aad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","pid":5357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313/rootfs","created":"2021-08-13T21:11:28.614804755Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","pid":5736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a/rootfs","created":"2021-08-13T21:11:32.44455056Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7f18b572-6c04-49c7-96fb-5a2371bb3c87"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","pid":4806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e/rootfs","created":"2021-08-13T21:11:04.349840757Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d4
733e3c1f69cdad73d5ad72ab641e2d34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","pid":5981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14/rootfs","created":"2021-08-13T21:11:33.83503426Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-kbbhs_9eaa843a-02b4-4271-b662-874e5c0d8978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","pid":4830,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13b
a8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075/rootfs","created":"2021-08-13T21:11:04.605657354Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","pid":5811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922/rootfs","created":"2021-08-13T21:11:33.18543693Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container",
"io.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","pid":4759,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70/rootfs","created":"2021-08-13T21:11:04.157337146Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","pid":4685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de63179805
32188ee58cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb/rootfs","created":"2021-08-13T21:11:03.343349422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813210044-393438_2286d6fd269f175f440c97e4f14c55e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","pid":5559,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1/rootfs","created":"2021-08-13T21:11:30.101293973Z","annotations":{"io.kubernetes.cri.container-name":"coredns","
io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","pid":5184,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843/rootfs","created":"2021-08-13T21:11:27.908851257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-2k9qh_22a31bb3-8b54-429b-9161-471a84001351"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","pid":4711,"status":"running",
"bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5/rootfs","created":"2021-08-13T21:11:03.401451887Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210813210044-393438_e0a4ae9891c4672892764c21799aff47"},"owner":"root"}]
	I0813 21:11:53.802791  435085 cri.go:113] list returned 18 containers
	I0813 21:11:53.802814  435085 cri.go:116] container: {ID:0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f Status:running}
	I0813 21:11:53.802831  435085 cri.go:116] container: {ID:14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a Status:running}
	I0813 21:11:53.802840  435085 cri.go:118] skipping 14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a - not in ps
	I0813 21:11:53.802853  435085 cri.go:116] container: {ID:2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 Status:running}
	I0813 21:11:53.802864  435085 cri.go:118] skipping 2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 - not in ps
	I0813 21:11:53.802869  435085 cri.go:116] container: {ID:3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce Status:running}
	I0813 21:11:53.802880  435085 cri.go:118] skipping 3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce - not in ps
	I0813 21:11:53.802886  435085 cri.go:116] container: {ID:7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 Status:running}
	I0813 21:11:53.802892  435085 cri.go:116] container: {ID:81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 Status:running}
	I0813 21:11:53.802899  435085 cri.go:118] skipping 81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 - not in ps
	I0813 21:11:53.802906  435085 cri.go:116] container: {ID:9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 Status:running}
	I0813 21:11:53.802913  435085 cri.go:118] skipping 9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 - not in ps
	I0813 21:11:53.802919  435085 cri.go:116] container: {ID:9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313 Status:running}
	I0813 21:11:53.802928  435085 cri.go:116] container: {ID:a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a Status:running}
	I0813 21:11:53.802935  435085 cri.go:118] skipping a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a - not in ps
	I0813 21:11:53.802950  435085 cri.go:116] container: {ID:aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e Status:running}
	I0813 21:11:53.802958  435085 cri.go:116] container: {ID:c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 Status:running}
	I0813 21:11:53.802968  435085 cri.go:118] skipping c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 - not in ps
	I0813 21:11:53.802973  435085 cri.go:116] container: {ID:cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075 Status:running}
	I0813 21:11:53.802982  435085 cri.go:116] container: {ID:d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922 Status:running}
	I0813 21:11:53.802989  435085 cri.go:116] container: {ID:d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70 Status:running}
	I0813 21:11:53.802996  435085 cri.go:116] container: {ID:d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb Status:running}
	I0813 21:11:53.803003  435085 cri.go:118] skipping d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb - not in ps
	I0813 21:11:53.803009  435085 cri.go:116] container: {ID:d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1 Status:running}
	I0813 21:11:53.803016  435085 cri.go:116] container: {ID:dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 Status:running}
	I0813 21:11:53.803023  435085 cri.go:118] skipping dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 - not in ps
	I0813 21:11:53.803028  435085 cri.go:116] container: {ID:df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 Status:running}
	I0813 21:11:53.803034  435085 cri.go:118] skipping df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 - not in ps
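	The "list returned 18 containers" pass above decodes the runc list JSON and filters it before pausing anything. A minimal sketch of that filtering follows, assuming the field names visible in the JSON (id, status, annotations); it is not minikube's cri.go. In this run, every ID the log skips as "not in ps" is a sandbox, and sandboxes do not appear in the crictl ps output that feeds the check.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// Field names assumed from the runc list JSON quoted above.
	type runcEntry struct {
		ID          string            `json:"id"`
		Status      string            `json:"status"`
		Annotations map[string]string `json:"annotations"`
	}
	
	func main() {
		// Two abbreviated entries standing in for the 18 in the log.
		data := []byte(`[
		  {"id":"0b6d52d9","status":"running",
		   "annotations":{"io.kubernetes.cri.container-type":"container"}},
		  {"id":"9ac1643b","status":"running",
		   "annotations":{"io.kubernetes.cri.container-type":"sandbox"}}]`)
	
		var entries []runcEntry
		if err := json.Unmarshal(data, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			if e.Status != "running" {
				continue // e.g. already paused on a later pass
			}
			if e.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
				continue // sandboxes never show up in crictl ps ("not in ps")
			}
			fmt.Println("candidate for pause:", e.ID)
		}
	}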
	I0813 21:11:53.803087  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f
	I0813 21:11:53.828299  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f 7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83
	I0813 21:11:53.858142  435085 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f 7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:11:53Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:11:54.134584  435085 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:54.148528  435085 pause.go:50] kubelet running: false
	I0813 21:11:54.148592  435085 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:11:54.372344  435085 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:11:54.372443  435085 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:11:54.532559  435085 cri.go:76] found id: "d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922"
	I0813 21:11:54.532593  435085 cri.go:76] found id: "d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1"
	I0813 21:11:54.532601  435085 cri.go:76] found id: "9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313"
	I0813 21:11:54.532608  435085 cri.go:76] found id: "cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075"
	I0813 21:11:54.532613  435085 cri.go:76] found id: "aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e"
	I0813 21:11:54.532619  435085 cri.go:76] found id: "0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f"
	I0813 21:11:54.532624  435085 cri.go:76] found id: "d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70"
	I0813 21:11:54.532630  435085 cri.go:76] found id: "c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	I0813 21:11:54.532636  435085 cri.go:76] found id: "7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83"
	I0813 21:11:54.532647  435085 cri.go:76] found id: ""
	I0813 21:11:54.532706  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:11:54.600639  435085 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","pid":4782,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f/rootfs","created":"2021-08-13T21:11:04.263792672Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","pid":5719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","rootfs":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a/rootfs","created":"2021-08-13T21:11:32.221001949Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-7z8h9_5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","pid":4673,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34/rootfs","created":"2021-08-13T21:11:03.334361917Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad7
3d5ad72ab641e2d34","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813210044-393438_e4e1034d86c85528fc2683beffec2e7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce/rootfs","created":"2021-08-13T21:11:28.741845914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-r4dmk_0549f087-6804-403a-91ac-46ea3176692a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","pid":
6078,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83/rootfs","created":"2021-08-13T21:11:35.319475018Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","pid":5990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5/rootfs","created":"2021-08-13T21:11:33.863917847Z","annotations":{"io.ku
bernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-29b2r_42ed3d11-7b24-4788-8823-852e5b2ca9ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","pid":4667,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8/rootfs","created":"2021-08-13T21:11:03.305162529Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813210044-393438_3b6aeb
f51f06d22f1277fd0d24410aad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","pid":5357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313/rootfs","created":"2021-08-13T21:11:28.614804755Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","pid":5736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.i
o/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a/rootfs","created":"2021-08-13T21:11:32.44455056Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7f18b572-6c04-49c7-96fb-5a2371bb3c87"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","pid":4806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e/rootfs","created":"2021-08-13T21:11:04.349840757Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d47
33e3c1f69cdad73d5ad72ab641e2d34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","pid":5981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14/rootfs","created":"2021-08-13T21:11:33.83503426Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-kbbhs_9eaa843a-02b4-4271-b662-874e5c0d8978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","pid":4830,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13ba
8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075/rootfs","created":"2021-08-13T21:11:04.605657354Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","pid":5811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922/rootfs","created":"2021-08-13T21:11:33.18543693Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","
io.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","pid":4759,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70/rootfs","created":"2021-08-13T21:11:04.157337146Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","pid":4685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de631798053
2188ee58cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb/rootfs","created":"2021-08-13T21:11:03.343349422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813210044-393438_2286d6fd269f175f440c97e4f14c55e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","pid":5559,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1/rootfs","created":"2021-08-13T21:11:30.101293973Z","annotations":{"io.kubernetes.cri.container-name":"coredns","i
o.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","pid":5184,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843/rootfs","created":"2021-08-13T21:11:27.908851257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-2k9qh_22a31bb3-8b54-429b-9161-471a84001351"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","pid":4711,"status":"running","
bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5/rootfs","created":"2021-08-13T21:11:03.401451887Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210813210044-393438_e0a4ae9891c4672892764c21799aff47"},"owner":"root"}]
	I0813 21:11:54.600938  435085 cri.go:113] list returned 18 containers
	I0813 21:11:54.600959  435085 cri.go:116] container: {ID:0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f Status:paused}
	I0813 21:11:54.600975  435085 cri.go:122] skipping {0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f paused}: state = "paused", want "running"
	I0813 21:11:54.600994  435085 cri.go:116] container: {ID:14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a Status:running}
	I0813 21:11:54.601003  435085 cri.go:118] skipping 14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a - not in ps
	I0813 21:11:54.601012  435085 cri.go:116] container: {ID:2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 Status:running}
	I0813 21:11:54.601021  435085 cri.go:118] skipping 2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 - not in ps
	I0813 21:11:54.601031  435085 cri.go:116] container: {ID:3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce Status:running}
	I0813 21:11:54.601039  435085 cri.go:118] skipping 3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce - not in ps
	I0813 21:11:54.601048  435085 cri.go:116] container: {ID:7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 Status:running}
	I0813 21:11:54.601053  435085 cri.go:116] container: {ID:81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 Status:running}
	I0813 21:11:54.601058  435085 cri.go:118] skipping 81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 - not in ps
	I0813 21:11:54.601062  435085 cri.go:116] container: {ID:9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 Status:running}
	I0813 21:11:54.601067  435085 cri.go:118] skipping 9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 - not in ps
	I0813 21:11:54.601070  435085 cri.go:116] container: {ID:9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313 Status:running}
	I0813 21:11:54.601075  435085 cri.go:116] container: {ID:a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a Status:running}
	I0813 21:11:54.601081  435085 cri.go:118] skipping a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a - not in ps
	I0813 21:11:54.601085  435085 cri.go:116] container: {ID:aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e Status:running}
	I0813 21:11:54.601094  435085 cri.go:116] container: {ID:c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 Status:running}
	I0813 21:11:54.601103  435085 cri.go:118] skipping c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 - not in ps
	I0813 21:11:54.601106  435085 cri.go:116] container: {ID:cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075 Status:running}
	I0813 21:11:54.601111  435085 cri.go:116] container: {ID:d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922 Status:running}
	I0813 21:11:54.601115  435085 cri.go:116] container: {ID:d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70 Status:running}
	I0813 21:11:54.601119  435085 cri.go:116] container: {ID:d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb Status:running}
	I0813 21:11:54.601124  435085 cri.go:118] skipping d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb - not in ps
	I0813 21:11:54.601128  435085 cri.go:116] container: {ID:d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1 Status:running}
	I0813 21:11:54.601132  435085 cri.go:116] container: {ID:dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 Status:running}
	I0813 21:11:54.601137  435085 cri.go:118] skipping dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 - not in ps
	I0813 21:11:54.601140  435085 cri.go:116] container: {ID:df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 Status:running}
	I0813 21:11:54.601145  435085 cri.go:118] skipping df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 - not in ps
	I0813 21:11:54.601193  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83
	I0813 21:11:54.642790  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313
	I0813 21:11:54.668417  435085 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:11:54Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:11:55.209210  435085 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:55.226307  435085 pause.go:50] kubelet running: false
	I0813 21:11:55.226381  435085 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:11:55.416338  435085 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:11:55.416426  435085 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:11:55.549840  435085 cri.go:76] found id: "d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922"
	I0813 21:11:55.549878  435085 cri.go:76] found id: "d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1"
	I0813 21:11:55.549885  435085 cri.go:76] found id: "9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313"
	I0813 21:11:55.549892  435085 cri.go:76] found id: "cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075"
	I0813 21:11:55.549900  435085 cri.go:76] found id: "aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e"
	I0813 21:11:55.549906  435085 cri.go:76] found id: "0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f"
	I0813 21:11:55.549912  435085 cri.go:76] found id: "d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70"
	I0813 21:11:55.549916  435085 cri.go:76] found id: "c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	I0813 21:11:55.549922  435085 cri.go:76] found id: "7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83"
	I0813 21:11:55.549932  435085 cri.go:76] found id: ""
	I0813 21:11:55.549984  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:11:55.609406  435085 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","pid":4782,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f/rootfs","created":"2021-08-13T21:11:04.263792672Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","pid":5719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","rootfs":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a/rootfs","created":"2021-08-13T21:11:32.221001949Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-7z8h9_5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","pid":4673,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34/rootfs","created":"2021-08-13T21:11:03.334361917Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad7
3d5ad72ab641e2d34","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210813210044-393438_e4e1034d86c85528fc2683beffec2e7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce/rootfs","created":"2021-08-13T21:11:28.741845914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-r4dmk_0549f087-6804-403a-91ac-46ea3176692a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","pid":
6078,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83/rootfs","created":"2021-08-13T21:11:35.319475018Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","pid":5990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5/rootfs","created":"2021-08-13T21:11:33.863917847Z","annotations":{"io.kub
ernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-29b2r_42ed3d11-7b24-4788-8823-852e5b2ca9ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","pid":4667,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8/rootfs","created":"2021-08-13T21:11:03.305162529Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210813210044-393438_3b6aebf
51f06d22f1277fd0d24410aad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","pid":5357,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313/rootfs","created":"2021-08-13T21:11:28.614804755Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","pid":5736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io
/a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a/rootfs","created":"2021-08-13T21:11:32.44455056Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7f18b572-6c04-49c7-96fb-5a2371bb3c87"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","pid":4806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e/rootfs","created":"2021-08-13T21:11:04.349840757Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2cd725a5ec9f8dadca5c661713ac55d473
3e3c1f69cdad73d5ad72ab641e2d34"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","pid":5981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14/rootfs","created":"2021-08-13T21:11:33.83503426Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-kbbhs_9eaa843a-02b4-4271-b662-874e5c0d8978"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","pid":4830,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13ba8
a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075/rootfs","created":"2021-08-13T21:11:04.605657354Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","pid":5811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922/rootfs","created":"2021-08-13T21:11:33.18543693Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","i
o.kubernetes.cri.sandbox-id":"a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","pid":4759,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70/rootfs","created":"2021-08-13T21:11:04.157337146Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","pid":4685,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532
188ee58cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb/rootfs","created":"2021-08-13T21:11:03.343349422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210813210044-393438_2286d6fd269f175f440c97e4f14c55e4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","pid":5559,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1/rootfs","created":"2021-08-13T21:11:30.101293973Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","pid":5184,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843/rootfs","created":"2021-08-13T21:11:27.908851257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-2k9qh_22a31bb3-8b54-429b-9161-471a84001351"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","pid":4711,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5/rootfs","created":"2021-08-13T21:11:03.401451887Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210813210044-393438_e0a4ae9891c4672892764c21799aff47"},"owner":"root"}]
	I0813 21:11:55.609652  435085 cri.go:113] list returned 18 containers
	I0813 21:11:55.609667  435085 cri.go:116] container: {ID:0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f Status:paused}
	I0813 21:11:55.609681  435085 cri.go:122] skipping {0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f paused}: state = "paused", want "running"
	I0813 21:11:55.609698  435085 cri.go:116] container: {ID:14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a Status:running}
	I0813 21:11:55.609706  435085 cri.go:118] skipping 14669b6c57b2dd0bb557aecd94cb0e5e8b2fa1dbe1712ce968cb9081b188d56a - not in ps
	I0813 21:11:55.609713  435085 cri.go:116] container: {ID:2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 Status:running}
	I0813 21:11:55.609721  435085 cri.go:118] skipping 2cd725a5ec9f8dadca5c661713ac55d4733e3c1f69cdad73d5ad72ab641e2d34 - not in ps
	I0813 21:11:55.609727  435085 cri.go:116] container: {ID:3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce Status:running}
	I0813 21:11:55.609742  435085 cri.go:118] skipping 3f4a9fcf554b7c927dad4a1f44bcc6bf18741fd7475d20943f9540774cd1d9ce - not in ps
	I0813 21:11:55.609748  435085 cri.go:116] container: {ID:7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 Status:paused}
	I0813 21:11:55.609759  435085 cri.go:122] skipping {7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83 paused}: state = "paused", want "running"
	I0813 21:11:55.609767  435085 cri.go:116] container: {ID:81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 Status:running}
	I0813 21:11:55.609774  435085 cri.go:118] skipping 81a88eed574b3c7c77f7caaac70ac5eef4892f1263c0cedaf15cde5167bdebc5 - not in ps
	I0813 21:11:55.609780  435085 cri.go:116] container: {ID:9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 Status:running}
	I0813 21:11:55.609787  435085 cri.go:118] skipping 9ac1643b6bb6b0d7de0deb7c25b5f09cf4afa73157426fac562413c496e9bad8 - not in ps
	I0813 21:11:55.609792  435085 cri.go:116] container: {ID:9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313 Status:running}
	I0813 21:11:55.609799  435085 cri.go:116] container: {ID:a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a Status:running}
	I0813 21:11:55.609806  435085 cri.go:118] skipping a51e1b05ddab9d6d41b4af9cb44a8499f2f1687c179b916378faaaf0e3277c6a - not in ps
	I0813 21:11:55.609812  435085 cri.go:116] container: {ID:aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e Status:running}
	I0813 21:11:55.609822  435085 cri.go:116] container: {ID:c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 Status:running}
	I0813 21:11:55.609832  435085 cri.go:118] skipping c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14 - not in ps
	I0813 21:11:55.609841  435085 cri.go:116] container: {ID:cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075 Status:running}
	I0813 21:11:55.609852  435085 cri.go:116] container: {ID:d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922 Status:running}
	I0813 21:11:55.609859  435085 cri.go:116] container: {ID:d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70 Status:running}
	I0813 21:11:55.609869  435085 cri.go:116] container: {ID:d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb Status:running}
	I0813 21:11:55.609877  435085 cri.go:118] skipping d5e3ceb90e01322b39f123d7d70a7413d701bc917d9de6317980532188ee58cb - not in ps
	I0813 21:11:55.609883  435085 cri.go:116] container: {ID:d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1 Status:running}
	I0813 21:11:55.609893  435085 cri.go:116] container: {ID:dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 Status:running}
	I0813 21:11:55.609943  435085 cri.go:118] skipping dd71ffcab16c207981a0f0f72f53ce2afb675368ee529dacd2ff086000534843 - not in ps
	I0813 21:11:55.609956  435085 cri.go:116] container: {ID:df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 Status:running}
	I0813 21:11:55.609965  435085 cri.go:118] skipping df6cafa1ea4bc110de9250391a3978beeace03e77496a47a85ca380b2edf6cc5 - not in ps
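The skip/keep decisions traced above amount to a two-part filter: a container is eligible for pausing only if `runc list` reports it as "running" and its ID also appeared in the earlier `crictl ps` output, so already-paused tasks and sandboxes that are "not in ps" are dropped. Below is a minimal Go sketch of that rule; the `container` type, `selectRunning` helper, and `inPs` set are illustrative assumptions for this report, not minikube's actual cri.go code:

    package main

    import "fmt"

    // container mirrors the {ID Status} pairs printed by cri.go above.
    type container struct {
        ID     string
        Status string
    }

    // selectRunning keeps only IDs that runc reports as running AND that
    // showed up in the earlier `crictl ps` listing (inPs); everything else
    // is skipped, matching the `state = "paused", want "running"` and
    // "not in ps" log lines above.
    func selectRunning(all []container, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" {
                continue // e.g. skipping {... paused}
            }
            if !inPs[c.ID] {
                continue // sandbox or other task not in ps
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        all := []container{
            {ID: "0b6d52d9", Status: "paused"},  // kube-scheduler: skipped
            {ID: "9e3a151d", Status: "running"}, // kube-proxy: kept
        }
        fmt.Println(selectRunning(all, map[string]bool{"9e3a151d": true}))
    }

With the 18 containers listed above, this filter yields exactly the two IDs (kube-proxy and etcd) that the subsequent pause commands target.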
	I0813 21:11:55.610016  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313
	I0813 21:11:55.638437  435085 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313 aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e
	I0813 21:11:55.666015  435085 out.go:177] 
	W0813 21:11:55.666171  435085 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313 aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:11:55Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	

	W0813 21:11:55.666192  435085 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:11:55.682973  435085 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:11:55.684678  435085 out.go:177] 

                                                
                                                
** /stderr **
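The GUEST_PAUSE exit captured above is a CLI-contract failure rather than a guest problem: the log first pauses one container successfully, then batches two container IDs into a single invocation, and `runc pause` accepts exactly one `<container-id>` per call ("requires exactly 1 argument(s)"). A hedged sketch of the per-container workaround follows; `pauseEach` is a hypothetical helper for illustration, not minikube's actual fix:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pauseEach issues one `runc pause` per container ID. Batching several
    // IDs into a single call fails with:
    //   runc: "pause" requires exactly 1 argument(s)
    func pauseEach(root string, ids []string) error {
        for _, id := range ids {
            cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        // The two IDs from the failing command in the log above.
        ids := []string{
            "9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313",
            "aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e",
        }
        if err := pauseEach("/run/containerd/runc/k8s.io", ids); err != nil {
            fmt.Println(err)
        }
    }

Looping over IDs trades one SSH round-trip for several, but it respects runc's one-argument contract and reports exactly which container failed to pause.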
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210813210044-393438 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438: exit status 2 (286.984273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813210044-393438 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20210813210044-393438 logs -n 25: (1.565963308s)
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | cert-options-20210813205929-393438               | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:00:44 UTC |
	|         | cert-options-20210813205929-393438                |                                                  |         |         |                               |                               |
	| start   | -p                                                | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:29 UTC | Fri, 13 Aug 2021 21:01:13 UTC |
	|         | force-systemd-flag-20210813205929-393438          |                                                  |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=kvm2              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813205929-393438          | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:13 UTC | Fri, 13 Aug 2021 21:01:13 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                  |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:13 UTC | Fri, 13 Aug 2021 21:01:15 UTC |
	|         | force-systemd-flag-20210813205929-393438          |                                                  |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210813205735-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:38 UTC | Fri, 13 Aug 2021 21:01:20 UTC |
	|         | kubernetes-upgrade-20210813205735-393438          |                                                  |         |         |                               |                               |
	|         | --memory=2200                                     |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813205735-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:20 UTC | Fri, 13 Aug 2021 21:01:21 UTC |
	|         | kubernetes-upgrade-20210813205735-393438          |                                                  |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210121-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:01:21 UTC |
	|         | disable-driver-mounts-20210813210121-393438       |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:53 UTC | Fri, 13 Aug 2021 21:02:44 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:53 UTC | Fri, 13 Aug 2021 21:02:54 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:03:08 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:03:16 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:05:02
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
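For reference when scanning the entries below: every line follows the klog header format documented just above. A small illustrative Go sketch, assuming exactly that format and nothing minikube-specific, that splits such a line into its fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine captures: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch(
            "I0813 21:05:02.888018  434502 out.go:298] Setting OutFile to fd 1 ...")
        if m != nil {
            fmt.Printf("level=%s mmdd=%s time=%s tid=%s src=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }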
	I0813 21:05:02.888018  434502 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:05:02.888239  434502 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:05:02.888250  434502 out.go:311] Setting ErrFile to fd 2...
	I0813 21:05:02.888254  434502 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:05:02.888376  434502 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:05:02.888592  434502 out.go:305] Setting JSON to false
	I0813 21:05:02.924177  434502 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6465,"bootTime":1628882238,"procs":180,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:05:02.924281  434502 start.go:121] virtualization: kvm guest
	I0813 21:05:02.926625  434502 out.go:177] * [embed-certs-20210813210115-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:05:02.928076  434502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:02.926775  434502 notify.go:169] Checking for updates...
	I0813 21:05:02.929450  434502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:05:02.930769  434502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:05:02.932110  434502 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:05:02.932613  434502 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:02.933226  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:02.933308  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:02.943630  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0813 21:05:02.944019  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:02.944692  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:02.944721  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:02.945088  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:02.945271  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:02.945440  434502 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:05:02.945896  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:02.945940  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:02.957603  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0813 21:05:02.957986  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:02.958515  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:02.958538  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:02.958876  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:02.959058  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:02.987833  434502 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:05:02.987853  434502 start.go:278] selected driver: kvm2
	I0813 21:05:02.987859  434502 start.go:751] validating driver "kvm2" against &{Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true sys
tem_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.987944  434502 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:05:02.988838  434502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.988993  434502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:05:02.998693  434502 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:05:02.999041  434502 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:05:02.999069  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:05:02.999079  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:02.999087  434502 start_flags.go:277] config:
	{Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Liste
nAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.999193  434502 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.318771  434426 out.go:177] * Starting control plane node no-preload-20210813210044-393438 in cluster no-preload-20210813210044-393438
	I0813 21:05:02.318800  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:02.318965  434426 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/config.json ...
	I0813 21:05:02.319196  434426 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:05:02.319251  434426 start.go:313] acquiring machines lock for no-preload-20210813210044-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:05:02.319528  434426 cache.go:108] acquiring lock: {Name:mkaf60fb03fc48f620204835a8c2e58ac4285be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319558  434426 cache.go:108] acquiring lock: {Name:mke39e3353eb75c75254f6351f63129b8eccdaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319568  434426 cache.go:108] acquiring lock: {Name:mke82dad524ab7543f06ba80a46c31462e90eaf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319562  434426 cache.go:108] acquiring lock: {Name:mk5ae4dca388ede54efe3f0a495fa4d7f638ce4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319651  434426 cache.go:108] acquiring lock: {Name:mk920d9e9f29ba2b1781316e9067fe8a78e86bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319694  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:05:02.319727  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:05:02.319731  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:05:02.319733  434426 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 185.868µs
	I0813 21:05:02.319748  434426 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:05:02.319711  434426 cache.go:108] acquiring lock: {Name:mkcfd106f227ad483e6a4cbb38d06a5e17fb84c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319645  434426 cache.go:108] acquiring lock: {Name:mk6fe844ec73ef4a411cd1ad882359df79d1727f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319776  434426 cache.go:108] acquiring lock: {Name:mke4c28e30686341fb8f0ce651a18ccb674aa951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319774  434426 cache.go:108] acquiring lock: {Name:mk87a11a146365014f21d5bffcf66f3437c38926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319780  434426 cache.go:108] acquiring lock: {Name:mkc72f69bc91cc506098b4e6b602bd9bf210acef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319751  434426 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 206.009µs
	I0813 21:05:02.319694  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:05:02.319815  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:05:02.319831  434426 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 123.7µs
	I0813 21:05:02.319832  434426 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 273.68µs
	I0813 21:05:02.319842  434426 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:05:02.319758  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:05:02.319846  434426 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:05:02.319814  434426 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:05:02.319742  434426 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 94.501µs
	I0813 21:05:02.319858  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:05:02.319860  434426 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319861  434426 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 352.713µs
	I0813 21:05:02.319869  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:05:02.319876  434426 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:05:02.319871  434426 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 99.695µs
	I0813 21:05:02.319884  434426 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319868  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:05:02.319890  434426 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 261.55µs
	I0813 21:05:02.319900  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:05:02.319906  434426 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319904  434426 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 127.312µs
	I0813 21:05:02.319914  434426 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319914  434426 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 140.766µs
	I0813 21:05:02.319929  434426 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:05:02.319937  434426 cache.go:88] Successfully saved all images to host disk.
	I0813 21:05:06.432932  434426 start.go:317] acquired machines lock for "no-preload-20210813210044-393438" in 4.113653507s
	I0813 21:05:06.432981  434426 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:05:06.432988  434426 fix.go:55] fixHost starting: 
	I0813 21:05:06.433427  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:06.433480  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:06.447081  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0813 21:05:06.447544  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:06.448090  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:05:06.448115  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:06.448448  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:06.448643  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:06.448817  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:05:06.451652  434426 fix.go:108] recreateIfNeeded on no-preload-20210813210044-393438: state=Stopped err=<nil>
	I0813 21:05:06.451684  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	W0813 21:05:06.451840  434426 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:05:04.382080  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:04.387091  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:04.387537  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:04.387577  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:04.387769  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Using SSH client type: external
	I0813 21:05:04.387816  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa (-rw-------)
	I0813 21:05:04.387876  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:04.387899  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | About to run SSH command:
	I0813 21:05:04.387915  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | exit 0
	I0813 21:05:05.526119  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:05.526446  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetConfigRaw
	I0813 21:05:05.527199  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:05.532260  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.532572  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.532606  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.532871  434236 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/config.json ...
	I0813 21:05:05.533085  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:05.533269  434236 machine.go:88] provisioning docker machine ...
	I0813 21:05:05.533290  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:05.533480  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.533630  434236 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210121-393438"
	I0813 21:05:05.533648  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.533772  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.538168  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.538470  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.538506  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.538587  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:05.538780  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.538935  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.539128  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:05.539270  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:05.539473  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:05.539489  434236 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210121-393438 && echo "default-k8s-different-port-20210813210121-393438" | sudo tee /etc/hostname
	I0813 21:05:05.673970  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210121-393438
	
	I0813 21:05:05.673998  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.678812  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.679112  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.679156  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.679296  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:05.679467  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.679625  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.679755  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:05.679951  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:05.680117  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:05.680142  434236 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210121-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210121-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210121-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:05.817552  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:05.817583  434236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:05.817602  434236 buildroot.go:174] setting up certificates
	I0813 21:05:05.817613  434236 provision.go:83] configureAuth start
	I0813 21:05:05.817630  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.817886  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:05.823752  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.824099  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.824139  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.824244  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.829042  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.829362  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.829397  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.829488  434236 provision.go:138] copyHostCerts
	I0813 21:05:05.829565  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:05.829579  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:05.829627  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:05.829738  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:05.829747  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:05.829771  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:05.829833  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:05.829845  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:05.829859  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:05.829910  434236 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210121-393438 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210121-393438]
	I0813 21:05:06.014962  434236 provision.go:172] copyRemoteCerts
	I0813 21:05:06.015027  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:06.015075  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.020059  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.020442  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.020483  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.020603  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.020788  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.020948  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.021074  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.114984  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:06.132635  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1314 bytes)
	I0813 21:05:06.149756  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:05:06.167156  434236 provision.go:86] duration metric: configureAuth took 349.531077ms
	I0813 21:05:06.167173  434236 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:06.167307  434236 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:06.167319  434236 machine.go:91] provisioned docker machine in 634.036261ms
	I0813 21:05:06.167325  434236 start.go:267] post-start starting for "default-k8s-different-port-20210813210121-393438" (driver="kvm2")
	I0813 21:05:06.167331  434236 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:06.167350  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.167606  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:06.167647  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.172554  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.172895  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.172927  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.173085  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.173238  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.173380  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.173532  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.266295  434236 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:06.271058  434236 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:06.271077  434236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:06.271129  434236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:06.271224  434236 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:06.271343  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:06.278509  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:06.295512  434236 start.go:270] post-start completed in 128.173359ms
	I0813 21:05:06.295544  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.295768  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.300966  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.301296  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.301340  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.301419  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.301649  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.301848  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.302008  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.302196  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:06.302368  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:06.302380  434236 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:05:06.432728  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888706.367503924
	
	I0813 21:05:06.432751  434236 fix.go:212] guest clock: 1628888706.367503924
	I0813 21:05:06.432761  434236 fix.go:225] Guest: 2021-08-13 21:05:06.367503924 +0000 UTC Remote: 2021-08-13 21:05:06.295750146 +0000 UTC m=+14.752103869 (delta=71.753778ms)
	I0813 21:05:06.432783  434236 fix.go:196] guest clock delta is within tolerance: 71.753778ms
	I0813 21:05:06.432791  434236 fix.go:57] fixHost completed within 14.715695122s
	I0813 21:05:06.432798  434236 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210121-393438", held for 14.715723811s
	I0813 21:05:06.432880  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.433135  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:06.438544  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.438953  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.438989  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.439107  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439256  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439723  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439924  434236 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:06.439947  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.439996  434236 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:06.440042  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.446606  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.446709  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447016  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.447068  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447100  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.447105  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.447118  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447316  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.447333  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.447509  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.447522  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.447659  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.447675  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.447813  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:02.137378  434036 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:02.137399  434036 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:05:02.137443  434036 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:02.178833  434036 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:02.178858  434036 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:02.178905  434036 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:02.219726  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:05:02.219760  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:02.219773  434036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:02.219787  434036 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.180 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210813205952-393438 NodeName:old-k8s-version-20210813205952-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:02.219952  434036 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210813205952-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210813205952-393438
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.180:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:05:02.220071  434036 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210813205952-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.180 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813205952-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:05:02.220131  434036 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0813 21:05:02.229626  434036 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:02.229694  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:02.239705  434036 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (625 bytes)
	I0813 21:05:02.254486  434036 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:05:02.269654  434036 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0813 21:05:02.284962  434036 ssh_runner.go:149] Run: grep 192.168.83.180	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:02.290202  434036 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:02.302287  434036 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438 for IP: 192.168.83.180
	I0813 21:05:02.302334  434036 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:02.302352  434036 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:02.302412  434036 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.key
	I0813 21:05:02.302437  434036 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.key.c79f34d7
	I0813 21:05:02.302462  434036 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.key
	I0813 21:05:02.302586  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:02.302634  434036 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:02.302645  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:02.302692  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:02.302756  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:02.302792  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:02.302845  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:02.304152  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:02.322598  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 21:05:02.339465  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:02.358152  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:05:02.378188  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:02.397511  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:02.415437  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:02.434086  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:02.452673  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:02.469777  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:02.485698  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:02.502082  434036 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:05:02.514329  434036 ssh_runner.go:149] Run: openssl version
	I0813 21:05:02.519755  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:02.527526  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.532127  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.532172  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.537982  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:05:02.547421  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:02.557122  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.561783  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.561825  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.568483  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:05:02.578597  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:02.586199  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.591637  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.591674  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.597570  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:05:02.605363  434036 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210813205952-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813205952-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.180 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.605459  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:02.605491  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:02.637033  434036 cri.go:76] found id: ""
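The listing above filters containers by the kube-system namespace label; the empty result (found id: "") means there is nothing to stop before restarting the cluster. A rough local equivalent of that crictl invocation (run directly instead of via sudo -s eval):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out)) // one container ID per line
        fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }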
	I0813 21:05:02.637074  434036 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:02.644103  434036 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:02.644121  434036 kubeadm.go:600] restartCluster start
	I0813 21:05:02.644161  434036 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:02.651608  434036 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:02.652905  434036 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210813205952-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:02.653446  434036 kubeconfig.go:128] "old-k8s-version-20210813205952-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:02.654463  434036 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
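The repair above re-adds the missing context to the kubeconfig under a write lock. A sketch of the same verify-then-repair shape using client-go's clientcmd package; minikube's actual code path differs, and the kubeconfig path and server URL here are illustrative:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext re-adds a context missing from a kubeconfig, pointing
    // it at the given API server, then writes the file back out.
    func ensureContext(kubeconfig, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            return err
        }
        if _, ok := cfg.Contexts[name]; ok {
            return nil // context already present: verify succeeded
        }
        cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
        cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{} // credentials omitted in this sketch
        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        return clientcmd.WriteToFile(*cfg, kubeconfig)
    }

    func main() {
        _ = ensureContext("/home/jenkins/.kube/config",
            "old-k8s-version-20210813205952-393438", "https://192.168.83.180:8443")
    }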
	I0813 21:05:02.657561  434036 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:02.665300  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:02.665341  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:02.677034  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:02.877383  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:02.877444  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:02.889500  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.077691  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.077756  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.087473  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.277813  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.277909  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.288350  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.477578  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.477685  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.491136  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.677355  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.677428  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.689066  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.877274  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.877375  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.887744  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.078043  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.078134  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.088710  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.278031  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.278126  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.288175  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.477425  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.477506  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.487223  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.677604  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.677673  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.687327  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.877648  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.877749  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.887947  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.077168  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.077259  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.086969  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.277273  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.277351  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.286776  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.478052  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.478136  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.487219  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.677543  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.677629  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.686541  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.686554  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.686585  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.694779  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.694799  434036 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
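The run of "Checking apiserver status ..." lines above is a fixed-interval poll: pgrep for the apiserver process until a deadline, and give up with "timed out waiting for the condition" if it never appears. The ~200ms spacing is visible in the timestamps; the timeout below is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for the kube-apiserver process at a fixed
    // interval until it is found or the deadline passes.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // apiserver process found
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForAPIServer(200*time.Millisecond, 3*time.Second); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }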
	I0813 21:05:05.694807  434036 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:05.694820  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:05.694872  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:05.724669  434036 cri.go:76] found id: ""
	I0813 21:05:05.724723  434036 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:05.738929  434036 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:05.746855  434036 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:05.746901  434036 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:05.753188  434036 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:05.753208  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.188862  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.941431  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.454088  434426 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813210044-393438" ...
	I0813 21:05:06.454119  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Start
	I0813 21:05:06.454272  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring networks are active...
	I0813 21:05:06.456218  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring network default is active
	I0813 21:05:06.456553  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring network mk-no-preload-20210813210044-393438 is active
	I0813 21:05:06.456952  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Getting domain xml...
	I0813 21:05:06.458950  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Creating domain...
	I0813 21:05:06.853802  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Waiting to get IP...
	I0813 21:05:06.854814  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.855297  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Found IP for machine: 192.168.61.54
	I0813 21:05:06.855335  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has current primary IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.855349  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Reserving static IP address...
	I0813 21:05:06.855696  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "no-preload-20210813210044-393438", mac: "52:54:00:e4:61:bf", ip: "192.168.61.54"} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:01:05 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:06.855734  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Reserved static IP address: 192.168.61.54
	I0813 21:05:06.855762  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | skip adding static IP to network mk-no-preload-20210813210044-393438 - found existing host DHCP lease matching {name: "no-preload-20210813210044-393438", mac: "52:54:00:e4:61:bf", ip: "192.168.61.54"}
	I0813 21:05:06.855795  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:06.855812  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Waiting for SSH to be available...
	I0813 21:05:06.860576  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.861034  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:01:05 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:06.861084  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.861251  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Using SSH client type: external
	I0813 21:05:06.861281  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa (-rw-------)
	I0813 21:05:06.861320  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:06.861340  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | About to run SSH command:
	I0813 21:05:06.861354  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | exit 0
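WaitForSSH above treats the VM as reachable once a no-op command ("exit 0") succeeds over SSH. A sketch using the external ssh client with a subset of the options logged above; the one-second retry interval is an assumption and the key path is shortened to a placeholder:

    package main

    import (
        "os/exec"
        "time"
    )

    // waitForSSH retries "exit 0" over SSH until the guest answers.
    func waitForSSH(user, ip, keyPath string) {
        for {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                user+"@"+ip, "exit 0")
            if cmd.Run() == nil {
                return // SSH is up
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        waitForSSH("docker", "192.168.61.54", "/path/to/id_rsa")
    }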
	I0813 21:05:03.000913  434502 out.go:177] * Starting control plane node embed-certs-20210813210115-393438 in cluster embed-certs-20210813210115-393438
	I0813 21:05:03.000930  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:03.000962  434502 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:05:03.000974  434502 cache.go:56] Caching tarball of preloaded images
	I0813 21:05:03.001073  434502 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:05:03.001089  434502 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:05:03.001201  434502 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/config.json ...
	I0813 21:05:03.001377  434502 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:05:03.001402  434502 start.go:313] acquiring machines lock for embed-certs-20210813210115-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
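The machines lock above serializes VM operations across the profiles running concurrently in this job ({... Delay:500ms Timeout:13m0s ...}); embed-certs blocks here until no-preload releases the lock at 21:05:20 below. minikube's lock implementation differs, but the retry/timeout shape is roughly this:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries every delay until timeout, using an exclusive
    // lock file; the returned func releases the lock.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("lock %s: timed out after %v", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
    }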
	I0813 21:05:06.544163  434236 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:06.546237  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:10.583772  434236 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.037505629s)
	I0813 21:05:10.583933  434236 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:05:10.584017  434236 ssh_runner.go:149] Run: which lz4
	I0813 21:05:10.589245  434236 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:05:10.593996  434236 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:05:10.594022  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0813 21:05:07.129333  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:07.186110  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
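Note that the restart path replays individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than running a full kubeadm init. A sketch of that sequence; the env handling approximates the sudo env PATH=... prefix in the logged commands:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Prepend the version-pinned binaries dir, as in the logged commands.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.14.0:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
    }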
	I0813 21:05:07.235916  434036 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:07.235998  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:07.748101  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:08.248184  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:08.748204  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:09.247459  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:09.747390  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:10.247498  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:10.748061  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:11.247289  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:11.747470  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.430768  434236 containerd.go:546] Took 3.841559 seconds to copy over tarball
	I0813 21:05:14.430846  434236 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:05:12.247435  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:12.747294  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:13.247274  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:13.747881  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.248154  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.747920  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:15.247841  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:15.747949  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:16.247615  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:16.747300  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.829404  434502 start.go:317] acquired machines lock for "embed-certs-20210813210115-393438" in 17.827980029s
	I0813 21:05:20.829452  434502 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:05:20.829459  434502 fix.go:55] fixHost starting: 
	I0813 21:05:20.829940  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:20.830000  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:20.844289  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0813 21:05:20.844710  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:20.845255  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:20.845286  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:20.845702  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:20.845894  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:20.846063  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:05:20.849256  434502 fix.go:108] recreateIfNeeded on embed-certs-20210813210115-393438: state=Stopped err=<nil>
	I0813 21:05:20.849289  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	W0813 21:05:20.849461  434502 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:05:17.247659  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:17.747678  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:18.247917  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:18.747391  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:19.247353  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:19.748079  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.248063  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.747264  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:21.248202  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:21.747268  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.014334  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:20.014703  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetConfigRaw
	I0813 21:05:20.015453  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.020523  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.020887  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.020915  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.021216  434426 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/config.json ...
	I0813 21:05:20.021379  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.021570  434426 machine.go:88] provisioning docker machine ...
	I0813 21:05:20.021591  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.021761  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.021875  434426 buildroot.go:166] provisioning hostname "no-preload-20210813210044-393438"
	I0813 21:05:20.021895  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.022022  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.026141  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.026506  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.026562  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.026646  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.026821  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.026940  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.027078  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.027227  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.027464  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.027484  434426 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813210044-393438 && echo "no-preload-20210813210044-393438" | sudo tee /etc/hostname
	I0813 21:05:20.147229  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813210044-393438
	
	I0813 21:05:20.147257  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.152107  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.152466  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.152497  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.152625  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.152801  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.152950  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.153071  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.153211  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.153354  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.153384  434426 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813210044-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813210044-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813210044-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:20.272763  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:20.272797  434426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:20.272825  434426 buildroot.go:174] setting up certificates
	I0813 21:05:20.272839  434426 provision.go:83] configureAuth start
	I0813 21:05:20.272855  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.273158  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.279037  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.279386  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.279412  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.279522  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.283673  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.284008  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.284038  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.284148  434426 provision.go:138] copyHostCerts
	I0813 21:05:20.284212  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:20.284240  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:20.284281  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:20.284383  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:20.284395  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:20.284419  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:20.284490  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:20.284499  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:20.284529  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:20.284602  434426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813210044-393438 san=[192.168.61.54 192.168.61.54 localhost 127.0.0.1 minikube no-preload-20210813210044-393438]
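The server cert above is issued off the cluster CA with SANs covering the VM IP, loopback, and both machine names, so clients can verify the endpoint however they address it. A self-contained Go sketch of issuing such a certificate; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (the real flow loads the existing CA key pair).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert whose SANs mirror the san=[...] list logged above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            DNSNames:     []string{"localhost", "minikube", "no-preload-20210813210044-393438"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.61.54"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }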
	I0813 21:05:20.460696  434426 provision.go:172] copyRemoteCerts
	I0813 21:05:20.460754  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:20.460784  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.465578  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.465824  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.465849  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.466020  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.466187  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.466318  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.466426  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.549980  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:20.566659  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 21:05:20.582025  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:05:20.597459  434426 provision.go:86] duration metric: configureAuth took 324.607454ms
	I0813 21:05:20.597480  434426 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:20.597630  434426 config.go:177] Loaded profile config "no-preload-20210813210044-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:05:20.597641  434426 machine.go:91] provisioned docker machine in 576.056768ms
	I0813 21:05:20.597648  434426 start.go:267] post-start starting for "no-preload-20210813210044-393438" (driver="kvm2")
	I0813 21:05:20.597654  434426 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:20.597676  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.598063  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:20.598109  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.603075  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.603418  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.603449  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.603543  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.603679  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.603801  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.603939  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.685964  434426 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:20.690151  434426 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:20.690173  434426 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:20.690229  434426 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:20.690331  434426 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:20.690448  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:20.697161  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:20.712227  434426 start.go:270] post-start completed in 114.567301ms
	I0813 21:05:20.712262  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.712551  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.717402  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.717730  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.717763  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.717833  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.718006  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.718113  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.718254  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.718395  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.718528  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.718538  434426 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:05:20.829132  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888720.727275171
	
	I0813 21:05:20.829162  434426 fix.go:212] guest clock: 1628888720.727275171
	I0813 21:05:20.829172  434426 fix.go:225] Guest: 2021-08-13 21:05:20.727275171 +0000 UTC Remote: 2021-08-13 21:05:20.71253417 +0000 UTC m=+18.577992282 (delta=14.741001ms)
	I0813 21:05:20.829200  434426 fix.go:196] guest clock delta is within tolerance: 14.741001ms
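The guest-clock check above parses the output of date +%s.%N from the VM, compares it with the host clock, and only resyncs when the delta exceeds a tolerance (here the delta was 14.741001ms, within bounds). A local sketch of the comparison, running date on the same machine instead of over SSH; the tolerance value is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            panic(err)
        }
        // Split "1628888720.727275171" into whole seconds and nanoseconds.
        parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
        secs, _ := strconv.ParseInt(parts[0], 10, 64)
        nanos, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(secs, nanos)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
    }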
	I0813 21:05:20.829208  434426 fix.go:57] fixHost completed within 14.396219786s
	I0813 21:05:20.829214  434426 start.go:80] releasing machines lock for "no-preload-20210813210044-393438", held for 14.39625577s
	I0813 21:05:20.829267  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.829537  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.835175  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.835525  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.835557  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.835718  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.835862  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.836588  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.836852  434426 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:20.836880  434426 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:20.836883  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.836925  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.842008  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.842283  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.842316  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.842429  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.842589  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.842829  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.843009  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.843166  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.843479  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.843508  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.843740  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.843914  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.844086  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.844212  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.925525  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:20.925625  434426 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:05:20.961756  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:05:20.972595  434426 docker.go:153] disabling docker service ...
	I0813 21:05:20.972647  434426 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:05:20.984043  434426 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:05:20.997048  434426 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:05:21.143159  434426 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:05:21.280324  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:05:21.292461  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:05:21.309628  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
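The opaque blob in the command above is the full containerd config.toml, base64-encoded on the host and decoded on the guest, so that TOML quotes and newlines survive the shell round-trip. The same pattern in a sketch, for an arbitrary file and path:

    package main

    import (
        "encoding/base64"
        "fmt"
        "os/exec"
    )

    // writeConfig ships contents through base64 so no shell quoting of the
    // payload is needed, then decodes it into place with sudo tee.
    func writeConfig(path, contents string) error {
        b64 := base64.StdEncoding.EncodeToString([]byte(contents))
        script := fmt.Sprintf(`printf %%s "%s" | base64 -d | sudo tee %s`, b64, path)
        return exec.Command("/bin/bash", "-c", script).Run()
    }

    func main() {
        if err := writeConfig("/tmp/config.toml", "root = \"/var/lib/containerd\"\n"); err != nil {
            fmt.Println(err)
        }
    }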
	I0813 21:05:21.326009  434426 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:05:21.333161  434426 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:05:21.333225  434426 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:05:21.347027  434426 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
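Note the fallback in the last few steps: when the bridge-netfilter sysctl key is missing (exit status 255, "which might be okay"), the code loads br_netfilter and then enables IPv4 forwarding. In sketch form:

    package main

    import "os/exec"

    func main() {
        // If the sysctl key is absent, the module just is not loaded yet.
        if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
            _ = exec.Command("sudo", "modprobe", "br_netfilter").Run()
        }
        _ = exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }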
	I0813 21:05:21.353239  434426 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:21.490905  434426 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:21.539895  434426 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:05:21.539979  434426 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:21.545304  434426 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
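The retry above covers the window between restarting containerd and its socket appearing, bounded by the 60s budget logged at start.go:392. A sketch with an assumed doubling backoff (minikube's retry package computes its own delays, e.g. the 1.104660288s above):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats the socket path until it exists or the budget runs out.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := time.Second
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2 // assumed backoff; the real schedule differs
        }
        return fmt.Errorf("%s did not appear within %v", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }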
	I0813 21:05:25.480460  434236 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.049581044s)
	I0813 21:05:25.480492  434236 containerd.go:553] Took 11.049688 seconds to extract the tarball
	I0813 21:05:25.480507  434236 ssh_runner.go:100] rm: /preloaded.tar.lz4
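Putting the preload steps together: crictl reported no preloaded images, so the tarball was copied in, unpacked under /var with lz4 (11.05s here), and then deleted. A sketch of the extract-and-clean step with the timing the log reports:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("Took %f seconds to extract the tarball\n", time.Since(start).Seconds())
        _ = exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
    }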
	I0813 21:05:25.541290  434236 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:25.688551  434236 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:25.730844  434236 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:05:25.772570  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:05:25.789555  434236 docker.go:153] disabling docker service ...
	I0813 21:05:25.789606  434236 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:05:25.809341  434236 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:05:25.819728  434236 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:05:25.989268  434236 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:05:26.146250  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
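
[Editor's note] Disabling Docker above is a four-step systemctl sequence: stop the socket, stop the service, disable the socket, then mask the service so nothing can re-activate it; the final is-active check confirms it stayed down. A sketch of the same ordering via os/exec, assuming passwordless sudo (not minikube's exact implementation):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Order matters: stopping docker.socket first prevents socket
        // activation from restarting dockerd mid-teardown.
        steps := [][]string{
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", s, err, out)
            }
        }
        // A non-zero exit from is-active means docker is down, which is the goal.
        if exec.Command("sudo", "systemctl", "is-active", "--quiet", "docker").Run() == nil {
            log.Fatal("docker is still active")
        }
        log.Println("docker disabled")
    }
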
	I0813 21:05:26.161385  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:05:26.178481  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 21:05:26.203180  434236 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:05:26.212763  434236 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:05:26.212829  434236 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:05:26.234818  434236 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:05:26.242186  434236 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:26.380655  434236 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:26.412767  434236 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:05:26.412855  434236 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:26.422428  434236 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 21:05:22.247279  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:24.748176  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:25.247258  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:25.747908  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:26.247535  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:26.747227  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:22.650397  434426 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:24.541137  434426 start.go:413] Will wait 60s for crictl version
	I0813 21:05:24.541205  434426 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:24.597599  434426 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T21:05:24Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 21:05:27.527828  434236 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:27.534223  434236 start.go:413] Will wait 60s for crictl version
	I0813 21:05:27.534302  434236 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:27.569887  434236 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:05:27.569975  434236 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:27.602589  434236 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:24.120446  434502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-20210813210115-393438" ...
	I0813 21:05:24.527111  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Start
	I0813 21:05:24.527410  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring networks are active...
	I0813 21:05:24.530247  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring network default is active
	I0813 21:05:24.530813  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring network mk-embed-certs-20210813210115-393438 is active
	I0813 21:05:24.531296  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Getting domain xml...
	I0813 21:05:24.533230  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Creating domain...
	I0813 21:05:25.823170  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Waiting to get IP...
	I0813 21:05:25.824727  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.825288  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Found IP for machine: 192.168.72.95
	I0813 21:05:25.825326  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has current primary IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.825338  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Reserving static IP address...
	I0813 21:05:25.825778  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "embed-certs-20210813210115-393438", mac: "52:54:00:f7:8f:97", ip: "192.168.72.95"} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:25.825808  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Reserved static IP address: 192.168.72.95
	I0813 21:05:25.825840  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | skip adding static IP to network mk-embed-certs-20210813210115-393438 - found existing host DHCP lease matching {name: "embed-certs-20210813210115-393438", mac: "52:54:00:f7:8f:97", ip: "192.168.72.95"}
	I0813 21:05:25.825864  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:25.825878  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Waiting for SSH to be available...
	I0813 21:05:25.831617  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.832074  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:25.832097  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.832291  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:25.832319  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:25.832358  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:25.832368  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:25.832379  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
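
[Editor's note] The WaitForSSH step above probes the VM by running `exit 0` through an external ssh client with strictly non-interactive flags; exit status 0 means sshd is up, while status 255 (seen later in this log) means the host is not reachable yet and the probe is retried. A sketch of that readiness check, with a placeholder key path standing in for the long per-machine path in the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` over ssh with the same kind of non-interactive
    // flags the log shows; a nil error means sshd accepted the connection.
    func sshReady(user, ip, key string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/path/to/machines/embed-certs/id_rsa" // placeholder for the key path in the log
        for i := 0; i < 30; i++ {
            if sshReady("docker", "192.168.72.95", key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("gave up waiting for SSH")
    }
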
	I0813 21:05:27.634975  434236 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:05:27.635046  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:27.640718  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:27.641057  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:27.641096  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:27.641226  434236 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:05:27.645735  434236 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
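
[Editor's note] The bash one-liner above is an idempotent hosts-entry update: strip any existing host.minikube.internal line, append the fresh IP mapping, and copy the result back over /etc/hosts. The same effect in plain Go, as a hypothetical local helper rather than minikube's own code:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites path so it contains exactly one line mapping
    // host to ip, whether or not an older mapping was present.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
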
	I0813 21:05:27.656156  434236 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:27.656227  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:27.690217  434236 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:27.690235  434236 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:05:27.690276  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:27.719945  434236 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:27.719970  434236 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:27.720022  434236 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:27.757584  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:05:27.757608  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:27.757619  434236 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:27.757635  434236 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210121-393438 NodeName:default-k8s-different-port-20210813210121-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.
168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:27.757797  434236 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210813210121-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:05:27.757917  434236 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210813210121-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.163 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210121-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0813 21:05:27.757983  434236 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:05:27.767025  434236 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:27.767092  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:27.776126  434236 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (564 bytes)
	I0813 21:05:27.790927  434236 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:05:27.803474  434236 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0813 21:05:27.816691  434236 ssh_runner.go:149] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:27.820882  434236 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:27.833898  434236 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438 for IP: 192.168.39.163
	I0813 21:05:27.833958  434236 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:27.833985  434236 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:27.834066  434236 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.key
	I0813 21:05:27.834099  434236 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.key.a64e5ae8
	I0813 21:05:27.834123  434236 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.key
	I0813 21:05:27.834281  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:27.834389  434236 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:27.834408  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:27.834446  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:27.834483  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:27.834513  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:27.834565  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:27.835926  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:27.868872  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:05:27.898347  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:27.927284  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:05:27.955412  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:27.982208  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:28.008804  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:28.037793  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:28.062318  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:28.090589  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:28.113355  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:28.139026  434236 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:05:28.159732  434236 ssh_runner.go:149] Run: openssl version
	I0813 21:05:28.167042  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:28.179222  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.187603  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.187662  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.197216  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:05:28.205497  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:28.214015  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.219020  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.219071  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.225521  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:05:28.233782  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:28.241654  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.247002  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.247039  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.253102  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
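
[Editor's note] The `openssl x509 -hash -noout` calls above compute the subject-hash basenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's certificate-directory lookup expects, and each CA is then symlinked under /etc/ssl/certs by that name. A sketch of the hash-then-link step, shelling out to openssl the same way the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash for a PEM certificate,
    // i.e. the basename expected for the /etc/ssl/certs/<hash>.0 symlink.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            log.Fatal(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs equivalent: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
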
	I0813 21:05:28.262141  434236 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210121-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210121-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_
ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:28.262269  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:28.262329  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:28.299554  434236 cri.go:76] found id: ""
	I0813 21:05:28.299631  434236 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:28.311680  434236 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:28.311708  434236 kubeadm.go:600] restartCluster start
	I0813 21:05:28.311755  434236 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:28.323986  434236 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.324997  434236 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210121-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:28.325273  434236 kubeconfig.go:128] "default-k8s-different-port-20210813210121-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:28.325795  434236 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:05:28.330761  434236 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:28.340580  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.340634  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.355687  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.556114  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.556194  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.570589  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.755828  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.755920  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.768185  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.956502  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.956587  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.972504  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.156801  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.156900  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.170261  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.356543  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.356620  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.371862  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.556146  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.556222  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.568431  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.756624  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.756716  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.772102  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.956366  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.956434  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.972817  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.156118  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.156213  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.173007  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.356290  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.356390  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.370997  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.556270  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.556363  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.569087  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.756426  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.756499  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.766215  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.956579  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.956667  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.966316  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.156673  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.156748  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.166867  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.355975  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.356076  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.367186  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.367210  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.367256  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.377313  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.377340  434236 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
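
[Editor's note] Each "Checking apiserver status" line above is one iteration of a fixed-interval poll: run pgrep for the kube-apiserver process, treat a non-zero exit as "not up yet", and give up at the deadline, which is what yields the "needs reconfigure" verdict here. A sketch of that loop, assuming a local pgrep rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process matching
    // the minikube pattern appears, or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when at least one process matches.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(200 * time.Millisecond) // the log shows ~200ms between checks
        }
        return fmt.Errorf("timed out waiting for kube-apiserver")
    }

    func main() {
        if err := waitForAPIServer(3 * time.Second); err != nil {
            log.Fatal(err) // corresponds to the "needs reconfigure" path in the log
        }
        fmt.Println("kube-apiserver is running")
    }
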
	I0813 21:05:31.377351  434236 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:31.377369  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:31.377436  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:31.420871  434236 cri.go:76] found id: ""
	I0813 21:05:31.420936  434236 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:31.437839  434236 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:31.447696  434236 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:31.447749  434236 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:31.456183  434236 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:31.456200  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:27.248144  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:27.747426  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:28.247901  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:28.747856  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:29.248076  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:29.747972  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:30.248094  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:30.747545  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.247708  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.747454  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.730142  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:32.610832  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:32.904123  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:33.089553  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
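
[Editor's note] The cluster restart above replays five kubeadm init phases in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full `kubeadm init`. A sketch of driving that sequence, with the binary path and config file taken from the log lines (the real commands also run under `sudo env PATH=...`):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.21.3/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"

        // Phase order matters: certs must exist before kubeconfigs are written,
        // and the kubelet must be started before the static pods can come up.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm %v: %v", p, err)
            }
        }
    }
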
	I0813 21:05:33.245454  434236 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:33.245536  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.762559  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.263157  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.762910  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.263052  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.762700  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.262410  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:32.247698  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:32.747521  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.247272  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.748045  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.247556  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.747910  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.247445  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.747731  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.247209  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.747718  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.915344  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:05:34.915378  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:05:34.915389  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | command : exit 0
	I0813 21:05:34.915403  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | err     : exit status 255
	I0813 21:05:34.915415  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | output  : 
	I0813 21:05:39.004545  434426 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:39.044544  434426 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:05:39.044628  434426 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:39.121568  434426 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:36.763121  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.262994  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.762859  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.263086  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.763215  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.262402  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.762448  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.262367  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.762311  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.262754  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.247466  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.747360  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.248246  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.748104  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.247303  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.748087  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.247393  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.747662  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.247902  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.748222  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.186148  434426 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 21:05:39.186217  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:39.191526  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:39.191849  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:39.191873  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:39.192100  434426 ssh_runner.go:149] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0813 21:05:39.196656  434426 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:39.209929  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:39.209982  434426 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:39.252981  434426 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:39.253001  434426 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:39.253043  434426 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:39.289557  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:05:39.289587  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:39.289599  434426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:39.289616  434426 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.54 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813210044-393438 NodeName:no-preload-20210813210044-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.54 CgroupDriver:cgroup
fs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:39.289783  434426 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210813210044-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:05:39.289902  434426 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210813210044-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.54 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813210044-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:05:39.289962  434426 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:05:39.301767  434426 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:39.301858  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:39.312854  434426 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (552 bytes)
	I0813 21:05:39.336182  434426 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:05:39.354752  434426 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0813 21:05:39.371503  434426 ssh_runner.go:149] Run: grep 192.168.61.54	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:39.375918  434426 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:39.387685  434426 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438 for IP: 192.168.61.54
	I0813 21:05:39.387740  434426 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:39.387764  434426 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:39.387864  434426 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.key
	I0813 21:05:39.387889  434426 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.key.f8b022bd
	I0813 21:05:39.387919  434426 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.key
	I0813 21:05:39.388040  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:39.388086  434426 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:39.388102  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:39.388136  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:39.388188  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:39.388249  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:39.388315  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:39.389647  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:39.417065  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:05:39.450225  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:39.473274  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:05:39.496042  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:39.518685  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:39.541018  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:39.566796  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:39.596134  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:39.617569  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:39.639014  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:39.667571  434426 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:05:39.681262  434426 ssh_runner.go:149] Run: openssl version
	I0813 21:05:39.687737  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:39.695800  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.700510  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.700555  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.708604  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:05:39.718381  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:39.728624  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.734280  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.734329  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.742886  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:05:39.754769  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:39.769034  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.776798  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.776849  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.785325  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
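	The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: OpenSSL locates CAs in /etc/ssl/certs via a symlink named <subject-hash>.0 pointing at the PEM, which is why each certificate is first hashed with `openssl x509 -hash -noout` and then linked. A one-function sketch of the idea (helper name hypothetical):

		package main

		import "fmt"

		// installCACmd returns a shell command that links a CA certificate into
		// /etc/ssl/certs under its OpenSSL subject hash -- the <hash>.0 naming
		// scheme (what c_rehash maintains) that the logged ln -fs commands
		// reproduce by hand.
		func installCACmd(pem string) string {
			return fmt.Sprintf(
				"sudo /bin/bash -c 'ln -fs %s /etc/ssl/certs/$(openssl x509 -hash -noout -in %s).0'",
				pem, pem)
		}

		func main() {
			fmt.Println(installCACmd("/usr/share/ca-certificates/minikubeCA.pem"))
		}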
	I0813 21:05:39.793427  434426 kubeadm.go:390] StartCluster: {Name:no-preload-20210813210044-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813210044-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:39.793515  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:39.793567  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:39.841767  434426 cri.go:76] found id: "a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd"
	I0813 21:05:39.841797  434426 cri.go:76] found id: "0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67"
	I0813 21:05:39.841805  434426 cri.go:76] found id: "cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e"
	I0813 21:05:39.841810  434426 cri.go:76] found id: "1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645"
	I0813 21:05:39.841815  434426 cri.go:76] found id: "f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4"
	I0813 21:05:39.841825  434426 cri.go:76] found id: "1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1"
	I0813 21:05:39.841840  434426 cri.go:76] found id: "f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402"
	I0813 21:05:39.841848  434426 cri.go:76] found id: ""
	I0813 21:05:39.841897  434426 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:05:39.874659  434426 cri.go:103] JSON = null
	W0813 21:05:39.874731  434426 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I0813 21:05:39.874793  434426 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:39.883098  434426 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:39.883133  434426 kubeadm.go:600] restartCluster start
	I0813 21:05:39.883183  434426 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:39.891774  434426 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:39.892947  434426 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813210044-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:39.893403  434426 kubeconfig.go:128] "no-preload-20210813210044-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:39.894254  434426 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:05:39.897519  434426 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:39.904898  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:39.904947  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:39.914791  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.115222  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.115313  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.130011  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.315354  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.315437  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.328987  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.515338  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.515430  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.526348  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.715644  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.715729  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.727876  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.915221  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.915304  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.925675  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.115841  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.115917  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.129609  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.315884  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.315953  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.331198  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.514920  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.515010  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.531118  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.715384  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.715481  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.732231  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.915489  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.915579  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.928849  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.115137  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.115246  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.129931  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
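	The block above is a poll loop: api_server.go re-runs pgrep roughly every 200ms, and each non-zero exit (no kube-apiserver process yet) is logged as "stopped". In outline (a sketch, not minikube's exact code; run stands in for ssh_runner):

		package main

		import (
			"fmt"
			"time"
		)

		// waitForAPIServerProcess polls for the kube-apiserver process until it
		// appears or the deadline passes. With -f, pgrep -x requires the pattern
		// to match the whole command line; -n picks the newest match; a zero exit
		// code means a matching process exists.
		func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
					return nil
				}
				time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
			}
			return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
		}

		func main() {
			// toy runner that never finds the process, like the failing log above
			fail := func(string) error { return fmt.Errorf("exit status 1") }
			fmt.Println(waitForAPIServerProcess(fail, time.Second))
		}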
	I0813 21:05:37.915942  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:37.920675  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:37.921020  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:37.921054  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:37.921190  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:37.921223  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:37.921271  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:37.921289  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:37.921302  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
	I0813 21:05:41.762446  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.263222  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.762296  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.262498  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.762232  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.262236  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.762300  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.262217  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.762238  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.262311  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.248257  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.747441  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.247903  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.747303  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.248179  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.747586  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.248077  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.748033  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.247697  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.747426  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.315098  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.315169  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.326811  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.515062  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.515137  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.525584  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.715863  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.715951  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.729056  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.915375  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.915483  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.926975  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.926999  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.927046  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.937063  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.937085  434426 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:05:42.937092  434426 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:42.937109  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:42.937157  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:42.972880  434426 cri.go:76] found id: "a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd"
	I0813 21:05:42.972899  434426 cri.go:76] found id: "0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67"
	I0813 21:05:42.972908  434426 cri.go:76] found id: "cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e"
	I0813 21:05:42.972912  434426 cri.go:76] found id: "1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645"
	I0813 21:05:42.972917  434426 cri.go:76] found id: "f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4"
	I0813 21:05:42.972923  434426 cri.go:76] found id: "1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1"
	I0813 21:05:42.972928  434426 cri.go:76] found id: "f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402"
	I0813 21:05:42.972933  434426 cri.go:76] found id: ""
	I0813 21:05:42.972940  434426 cri.go:221] Stopping containers: [a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd 0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67 cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e 1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645 f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4 1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1 f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402]
	I0813 21:05:42.972988  434426 ssh_runner.go:149] Run: which crictl
	I0813 21:05:42.977301  434426 ssh_runner.go:149] Run: sudo /bin/crictl stop a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd 0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67 cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e 1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645 f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4 1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1 f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402
	I0813 21:05:43.015098  434426 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:43.030422  434426 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:43.038207  434426 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:43.038258  434426 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:43.046083  434426 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:43.046108  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:43.234184  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.031394  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.286845  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.434332  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
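	Rather than a full `kubeadm init`, the restart path above replays individual init phases against the existing /var/tmp/minikube/kubeadm.yaml so the cluster can be reconfigured in place. The sequence, condensed into a sketch (variable and function names are illustrative):

		package main

		import "fmt"

		// restartPhases mirrors the order logged above: certificates, kubeconfigs,
		// kubelet start, control-plane static pods, then the local etcd static pod.
		var restartPhases = []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}

		// phaseCmd renders one of the logged commands for a given kubeadm init phase.
		func phaseCmd(phase string) string {
			return fmt.Sprintf(
				"sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH "+
					"kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", phase)
		}

		func main() {
			for _, p := range restartPhases {
				fmt.Println(phaseCmd(p))
			}
		}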
	I0813 21:05:44.561020  434426 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:44.561093  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.072644  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.572512  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.072935  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.572209  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.072820  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.070383  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:05:44.070417  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:05:44.070425  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | command : exit 0
	I0813 21:05:44.070435  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | err     : exit status 255
	I0813 21:05:44.070444  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | output  : 
	I0813 21:05:47.070752  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:47.075748  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.076089  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.076124  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.076266  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:47.076301  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:47.076341  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:47.076360  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:47.076373  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
	I0813 21:05:47.209990  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:47.210279  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetConfigRaw
	I0813 21:05:47.210980  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:47.215599  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.215971  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.216004  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.216197  434502 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/config.json ...
	I0813 21:05:47.216352  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.216531  434502 machine.go:88] provisioning docker machine ...
	I0813 21:05:47.216560  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.216747  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.216909  434502 buildroot.go:166] provisioning hostname "embed-certs-20210813210115-393438"
	I0813 21:05:47.216930  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.217053  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.221366  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.221681  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.221711  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.221783  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.221941  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.222076  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.222174  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.222331  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.222497  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.222516  434502 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210813210115-393438 && echo "embed-certs-20210813210115-393438" | sudo tee /etc/hostname
	I0813 21:05:47.350613  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210813210115-393438
	
	I0813 21:05:47.350646  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.355442  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.355764  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.355801  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.355886  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.356046  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.356191  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.356328  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.356481  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.356629  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.356649  434502 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210813210115-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210813210115-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210813210115-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:47.480637  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:47.480667  434502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:47.480689  434502 buildroot.go:174] setting up certificates
	I0813 21:05:47.480699  434502 provision.go:83] configureAuth start
	I0813 21:05:47.480708  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.480943  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:47.485661  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.485926  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.485958  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.486060  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.490062  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.490323  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.490355  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.490435  434502 provision.go:138] copyHostCerts
	I0813 21:05:47.490506  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:47.490518  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:47.490574  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:47.490661  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:47.490683  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:47.490709  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:47.490777  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:47.490788  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:47.490812  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:47.490871  434502 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210813210115-393438 san=[192.168.72.95 192.168.72.95 localhost 127.0.0.1 minikube embed-certs-20210813210115-393438]
	I0813 21:05:47.660627  434502 provision.go:172] copyRemoteCerts
	I0813 21:05:47.660682  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:47.660708  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.664944  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.665226  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.665254  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.665398  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.665520  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.665651  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.665742  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:47.750640  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:47.766712  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 21:05:47.782725  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:05:47.797861  434502 provision.go:86] duration metric: configureAuth took 317.153687ms
	I0813 21:05:47.797877  434502 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:47.798055  434502 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:47.798071  434502 machine.go:91] provisioned docker machine in 581.525098ms
	I0813 21:05:47.798080  434502 start.go:267] post-start starting for "embed-certs-20210813210115-393438" (driver="kvm2")
	I0813 21:05:47.798088  434502 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:47.798118  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.798393  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:47.798425  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.802968  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.803268  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.803301  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.803366  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.803537  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.803688  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.803819  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:46.762882  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.263194  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.762994  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.262820  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.762597  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.262687  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.762934  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.262621  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.763234  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.262884  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.247753  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.747493  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.247858  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.748175  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.247346  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.747616  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.247612  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.748066  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.247916  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.748114  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.573012  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.072262  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.573005  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.072918  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.572925  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.072383  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.572832  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.072251  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.572409  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.072071  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.889883  434502 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:47.894242  434502 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:47.894265  434502 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:47.894308  434502 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:47.894408  434502 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:47.894509  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:47.900616  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:47.916161  434502 start.go:270] post-start completed in 118.068413ms
	I0813 21:05:47.916192  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.916429  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.921006  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.921302  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.921333  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.921403  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.921548  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.921671  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.921788  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.921919  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.922054  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.922064  434502 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 21:05:48.039157  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888747.971364911
	
	I0813 21:05:48.039178  434502 fix.go:212] guest clock: 1628888747.971364911
	I0813 21:05:48.039185  434502 fix.go:225] Guest: 2021-08-13 21:05:47.971364911 +0000 UTC Remote: 2021-08-13 21:05:47.916414238 +0000 UTC m=+45.083995073 (delta=54.950673ms)
	I0813 21:05:48.039202  434502 fix.go:196] guest clock delta is within tolerance: 54.950673ms
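	fix.go compares the guest clock (read over SSH) against the host's and only forces a resync when the drift exceeds a tolerance; here the ~55ms delta passes. The check in miniature (a sketch; the tolerance value below is an assumption, not minikube's constant):

		package main

		import (
			"fmt"
			"time"
		)

		// clockWithinTolerance reports whether the guest clock is close enough to
		// the host clock that no resync is needed: the absolute delta is compared
		// against the allowed drift.
		func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
			delta := guest.Sub(host)
			if delta < 0 {
				delta = -delta
			}
			return delta <= tolerance
		}

		func main() {
			host := time.Now()
			guest := host.Add(54950673 * time.Nanosecond) // the 54.950673ms delta from the log
			fmt.Println(clockWithinTolerance(guest, host, time.Second)) // true: within tolerance
		}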
	I0813 21:05:48.039209  434502 fix.go:57] fixHost completed within 27.209750115s
	I0813 21:05:48.039214  434502 start.go:80] releasing machines lock for "embed-certs-20210813210115-393438", held for 27.209782445s
	I0813 21:05:48.039259  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.039507  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:48.044093  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.044367  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.044399  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.044490  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.044649  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.045063  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.045313  434502 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:48.045327  434502 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:48.045335  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:48.045360  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:48.050777  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.051103  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.051134  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.051228  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:48.051380  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:48.051545  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:48.051692  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:48.051871  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.052188  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.052219  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.052357  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:48.052507  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:48.052648  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:48.052780  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:48.173452  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:48.173567  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:52.198912  434502 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02531638s)
	I0813 21:05:52.199078  434502 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:05:52.199145  434502 ssh_runner.go:149] Run: which lz4
	I0813 21:05:52.204358  434502 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:05:52.209369  434502 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:05:52.209398  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
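The block above shows the preload decision: list images with crictl, conclude the expected kube-apiserver image is absent, stat /preloaded.tar.lz4 on the guest, and only then scp the ~886 MiB tarball. The control flow, reduced to a local Go sketch; the crictl JSON parsing is simplified to a substring check, so treat it as an illustration only:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hasImage stands in for parsing `crictl images --output json`;
    // a substring check is enough for this sketch.
    func hasImage(name string) bool {
    	out, err := exec.Command("crictl", "images", "--output", "json").Output()
    	return err == nil && strings.Contains(string(out), name)
    }

    func main() {
    	const marker = "k8s.gcr.io/kube-apiserver:v1.21.3"
    	if hasImage(marker) {
    		fmt.Println("images already preloaded, skipping tarball copy")
    		return
    	}
    	// Existence check first: only scp when the tarball is absent.
    	if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
    		fmt.Println("tarball missing; would scp preloaded-images-*.tar.lz4 here")
    	}
    }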
	I0813 21:05:51.763159  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.263132  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.763171  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.263124  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.762320  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.263218  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.762305  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.262842  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.763095  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.263173  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.248079  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.748031  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.247898  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.747859  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.248141  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.747850  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.247349  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.747304  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.247906  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.748393  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.573023  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.599967  434426 api_server.go:70] duration metric: took 8.038946734s to wait for apiserver process to appear ...
	I0813 21:05:52.599996  434426 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:05:52.600009  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:52.600694  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": dial tcp 192.168.61.54:8443: connect: connection refused
	I0813 21:05:53.101216  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:53.101986  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": dial tcp 192.168.61.54:8443: connect: connection refused
	I0813 21:05:53.601685  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:56.328428  434502 containerd.go:546] Took 4.124102 seconds to copy over tarball
	I0813 21:05:56.328505  434502 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:05:58.602269  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:59.101446  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:59.226644  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:59.226678  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:05:59.601108  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:59.611587  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:59.611615  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:00.101414  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:00.119971  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:00.120000  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:00.601622  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:00.623033  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:00.623063  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:01.101661  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:01.109930  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0813 21:06:01.121432  434426 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:06:01.121468  434426 api_server.go:129] duration metric: took 8.521464894s to wait for apiserver health ...
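The 403 (anonymous user probing /healthz) and 500 (healthz check failed while post-start hooks finish) responses above are expected during apiserver startup; the poller simply retries every 500ms until it sees 200. A minimal sketch of such a poller in Go, using the endpoint from the log and deliberately skipping certificate verification, since at this stage only reachability matters:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.54:8443/healthz"
    	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
    		resp, err := client.Get(url)
    		if err != nil {
    			continue // connection refused while the apiserver restarts
    		}
    		code := resp.StatusCode
    		resp.Body.Close()
    		if code == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		// 403 (anonymous healthz) and 500 (post-start hooks pending)
    		// both mean "not ready yet"; keep polling.
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }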
	I0813 21:06:01.121481  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:06:01.121494  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:56.763214  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.262238  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.762416  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.263249  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.762893  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.262377  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.762933  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.263263  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.762710  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.262528  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.247736  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.747851  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.247940  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.748248  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.247931  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.747954  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.247906  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.747960  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.247883  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.262304  434036 api_server.go:70] duration metric: took 54.026388934s to wait for apiserver process to appear ...
	I0813 21:06:01.262331  434036 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:06:01.262343  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:01.123271  434426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:01.123346  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:01.137091  434426 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
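The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist holds the bridge CNI configuration recommended above. The log does not show its bytes; the sketch below builds a representative bridge conflist with Go's encoding/json, using the 10.244.0.0/16 pod CIDR that also appears in the kubeadm config later in this log. All plugin field values are illustrative, not the exact file minikube writes:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	conflist := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "k8s",
    		"plugins": []map[string]interface{}{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // default pod CIDR
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, err := json.MarshalIndent(conflist, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // would land in /etc/cni/net.d/1-k8s.conflist
    }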
	I0813 21:06:01.193557  434426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:01.214072  434426 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:01.214117  434426 system_pods.go:61] "coredns-78fcd69978-f47dd" [4aec428d-547b-4d87-bc39-78bbccb8baea] Running
	I0813 21:06:01.214125  434426 system_pods.go:61] "etcd-no-preload-20210813210044-393438" [7a80ae51-8063-4d28-8ccb-c0cfcfe14c33] Running
	I0813 21:06:01.214131  434426 system_pods.go:61] "kube-apiserver-no-preload-20210813210044-393438" [db01fbc4-e895-4457-bff3-53cff9d0699a] Running
	I0813 21:06:01.214146  434426 system_pods.go:61] "kube-controller-manager-no-preload-20210813210044-393438" [035b8aaf-5080-423b-844e-4f0a28bd0c3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:06:01.214154  434426 system_pods.go:61] "kube-proxy-jl8gn" [20fe4049-f327-444e-8e06-19de55971a1e] Running
	I0813 21:06:01.214165  434426 system_pods.go:61] "kube-scheduler-no-preload-20210813210044-393438" [3e93dca1-6885-4de5-8d71-8597dab2a441] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:06:01.214175  434426 system_pods.go:61] "metrics-server-7c784ccb57-9bt6z" [17511551-ab42-48c3-adf3-3221e19fc573] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:01.214187  434426 system_pods.go:61] "storage-provisioner" [9488dce9-830b-44a7-93d1-cfb9d1d96514] Running
	I0813 21:06:01.214196  434426 system_pods.go:74] duration metric: took 20.612475ms to wait for pod list to return data ...
	I0813 21:06:01.214210  434426 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:01.222653  434426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:01.222708  434426 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:01.222727  434426 node_conditions.go:105] duration metric: took 8.511383ms to run NodePressure ...
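node_conditions.go reads each node's capacity and then verifies that no pressure condition is set. With client-go the same check is a Nodes().List plus a walk over Status.Capacity and Status.Conditions; a sketch, assuming a kubeconfig at the default location:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    		// NodePressure verification: any pressure condition being True fails it.
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
    				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
    				fmt.Printf("  pressure condition %s is True\n", c.Type)
    			}
    		}
    	}
    }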
	I0813 21:06:01.222759  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:01.762658  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:05.763242  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:06.263188  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:05.242867  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:05.255235  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:05.756016  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:07.324246  434502 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.995714396s)
	I0813 21:06:07.324273  434502 containerd.go:553] Took 10.995815 seconds to extract the tarball
	I0813 21:06:07.324286  434502 ssh_runner.go:100] rm: /preloaded.tar.lz4
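Copying the tarball took about 4s and unpacking it about 11s; the unpack is a single tar -I lz4 -C /var -xf invocation, timed the same way ssh_runner reports completions. A local equivalent with os/exec, using the paths from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// -I lz4 pipes the archive through the lz4 decompressor;
    	// -C /var lands the image store under /var/lib/containerd.
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("extract failed:", err)
    		return
    	}
    	fmt.Printf("Completed: tar: (%s)\n", time.Since(start))
    }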
	I0813 21:06:07.386418  434502 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:06:07.538846  434502 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:06:07.603956  434502 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:06:07.647754  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:06:07.660926  434502 docker.go:153] disabling docker service ...
	I0813 21:06:07.660980  434502 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:06:07.672885  434502 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:06:07.684627  434502 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:06:07.818921  434502 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:06:07.320076  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:07.320113  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:07.755543  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:07.994267  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:07.994311  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:08.255650  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:08.313630  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:08.313672  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:08.756343  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:08.766284  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:08.766312  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:09.255426  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:09.264852  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 200:
	ok
	I0813 21:06:09.275355  434036 api_server.go:139] control plane version: v1.14.0
	I0813 21:06:09.275378  434036 api_server.go:129] duration metric: took 8.013041101s to wait for apiserver health ...
	I0813 21:06:09.275391  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:06:09.275400  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:07.977070  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:06:07.994507  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:06:08.014648  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
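Shipping the containerd config as one base64 blob keeps the TOML intact across the SSH shell (no quoting or newline surprises); the guest simply pipes it through base64 -d into /etc/containerd/config.toml. The equivalent round trip in Go, with the config abbreviated to its first lines:

    package main

    import (
    	"encoding/base64"
    	"fmt"
    	"os"
    )

    func main() {
    	config := `root = "/var/lib/containerd"
    state = "/run/containerd"
    oom_score = 0
    ` // abbreviated: the real blob carries the full CRI/runc/cni sections

    	encoded := base64.StdEncoding.EncodeToString([]byte(config))
    	// On the guest this becomes:
    	//   printf %s "<encoded>" | base64 -d | sudo tee /etc/containerd/config.toml
    	decoded, err := base64.StdEncoding.DecodeString(encoded)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Print(string(decoded))
    }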
	I0813 21:06:08.032466  434502 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:06:08.041649  434502 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:06:08.041710  434502 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:06:08.067719  434502 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
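The sysctl probe at 21:06:08.032466 fails with "No such file or directory" simply because the br_netfilter module is not loaded yet, which is why the very next steps load the module and enable IPv4 forwarding. That fallback, sketched with os/exec:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl only exists once br_netfilter is loaded.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
    			return
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
    	}
    }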
	I0813 21:06:08.077684  434502 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:06:08.247330  434502 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:06:08.315801  434502 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:06:08.315881  434502 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:06:08.322446  434502 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 21:06:09.427340  434502 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
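containerd was just restarted, so the first stat of /run/containerd/containerd.sock fails and retry.go schedules another attempt about a second later, all within the 60s budget announced at start.go:392. The wait loop, condensed into a sketch; the fixed 1s interval and error wrapping are assumptions (the real retry uses a jittered delay):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path until it exists or the timeout elapses,
    // mirroring the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s: %w", path, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("containerd socket is up")
    }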
	I0813 21:06:09.434702  434502 start.go:413] Will wait 60s for crictl version
	I0813 21:06:09.434773  434502 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:06:09.475662  434502 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:06:09.475749  434502 ssh_runner.go:149] Run: containerd --version
	I0813 21:06:09.516486  434502 ssh_runner.go:149] Run: containerd --version
	I0813 21:06:06.763215  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:07.263058  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:07.762852  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:08.262800  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:08.763094  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.262916  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.762562  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:10.262282  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:10.763196  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:11.263123  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.277075  434036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:09.277166  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:09.297330  434036 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:06:09.320632  434036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:09.333565  434036 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:09.333597  434036 system_pods.go:61] "coredns-fb8b8dccf-sgnld" [a92bab36-fc79-11eb-9c66-525400553b5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:06:09.333602  434036 system_pods.go:61] "etcd-old-k8s-version-20210813205952-393438" [ccce6bed-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333609  434036 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813205952-393438" [cb9d3317-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333616  434036 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813205952-393438" [bff107c1-fc79-11eb-9c66-525400553b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:06:09.333623  434036 system_pods.go:61] "kube-proxy-zrnsp" [a94a53aa-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:06:09.333629  434036 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813205952-393438" [c809ace6-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333635  434036 system_pods.go:61] "metrics-server-8546d8b77b-dm4n5" [d477f50c-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:09.333645  434036 system_pods.go:61] "storage-provisioner" [aaf35a18-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:06:09.333651  434036 system_pods.go:74] duration metric: took 12.999255ms to wait for pod list to return data ...
	I0813 21:06:09.333661  434036 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:09.338462  434036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:09.338518  434036 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:09.338577  434036 node_conditions.go:105] duration metric: took 4.908525ms to run NodePressure ...
	I0813 21:06:09.338598  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:09.917816  434036 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:09.932993  434036 kubeadm.go:746] kubelet initialised
	I0813 21:06:09.933021  434036 kubeadm.go:747] duration metric: took 15.176768ms waiting for restarted kubelet to initialise ...
	I0813 21:06:09.933032  434036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:09.948756  434036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:12.000537  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:09.166061  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (7.94326905s)
	I0813 21:06:09.166108  434426 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:09.176349  434426 kubeadm.go:746] kubelet initialised
	I0813 21:06:09.176372  434426 kubeadm.go:747] duration metric: took 10.253817ms waiting for restarted kubelet to initialise ...
	I0813 21:06:09.176382  434426 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:09.203731  434426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-f47dd" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:11.323618  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
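Both profiles are now in pod_ready.go, polling their coredns pod until its Ready condition turns True, with a 4m ceiling. With client-go the equivalent is a poll over Status.Conditions; a sketch using the pod name from the log and a kubeconfig at the default location:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-78fcd69978-f47dd", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient; keep waiting
    		}
    		return isReady(pod), nil
    	})
    	fmt.Println("ready wait finished, err =", err)
    }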
	I0813 21:06:09.561886  434502 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:06:09.561939  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:06:09.568523  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:06:09.568978  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:06:09.569057  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:06:09.569224  434502 ssh_runner.go:149] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0813 21:06:09.574609  434502 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:06:09.587637  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:06:09.587717  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:06:09.631577  434502 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:06:09.631608  434502 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:06:09.631667  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:06:09.676879  434502 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:06:09.676920  434502 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:06:09.676977  434502 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:06:09.724335  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:06:09.724374  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:09.724388  434502 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:06:09.724408  434502 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.95 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210813210115-393438 NodeName:embed-certs-20210813210115-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.72.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:06:09.724698  434502 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210813210115-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:06:09.724884  434502 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210813210115-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.95 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:06:09.724996  434502 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:06:09.739481  434502 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:06:09.739570  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:06:09.748730  434502 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0813 21:06:09.765998  434502 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:06:09.782632  434502 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:06:09.799961  434502 ssh_runner.go:149] Run: grep 192.168.72.95	control-plane.minikube.internal$ /etc/hosts
	I0813 21:06:09.804846  434502 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
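The one-liner above updates /etc/hosts idempotently: filter out any existing control-plane.minikube.internal line, append the fresh mapping, and sudo cp the temp file back. The same filter-and-append logic in Go, printing the result instead of writing it since /etc/hosts needs root:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any previous mapping for name and appends ip<TAB>name,
    // mirroring the grep -v / echo / cp shell pipeline.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(upsertHost(string(data), "192.168.72.95", "control-plane.minikube.internal"))
    }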
	I0813 21:06:09.818569  434502 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438 for IP: 192.168.72.95
	I0813 21:06:09.818631  434502 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:06:09.818655  434502 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:06:09.818749  434502 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/client.key
	I0813 21:06:09.818782  434502 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.key.a2bb46f7
	I0813 21:06:09.818808  434502 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.key
	I0813 21:06:09.818950  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:06:09.819005  434502 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:06:09.819020  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:06:09.819060  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:06:09.819095  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:06:09.819129  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:06:09.819196  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:06:09.820623  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:06:09.843444  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:06:09.865562  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:06:09.888012  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:06:09.908367  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:06:09.931692  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:06:09.955112  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:06:09.977220  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:06:09.996735  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:06:10.017440  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:06:10.036588  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:06:10.057107  434502 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
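
The scp lines above push each generated certificate and key into the guest under /var/lib/minikube/certs and /usr/share/ca-certificates. A minimal sketch of one such push, assuming a plain scp invocation; the user, host, and key path are placeholders, and minikube itself streams the bytes over its own ssh_runner rather than shelling out to scp:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // copyToGuest pushes one local file to the guest over scp.
    // keyPath, user and host are hypothetical values standing in
    // for whatever the kvm2 driver reports; the real transfer in
    // the log goes through ssh_runner.go:316, not the scp binary.
    func copyToGuest(keyPath, user, host, src, dst string) error {
    	cmd := exec.Command("scp", "-i", keyPath,
    		src, fmt.Sprintf("%s@%s:%s", user, host, dst))
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("scp %s -> %s: %v: %s", src, dst, err, out)
    	}
    	return nil
    }

    func main() {
    	// Example invocation with illustrative values.
    	if err := copyToGuest("/path/to/id_rsa", "docker", "192.168.72.95",
    		"ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
    		fmt.Println(err)
    	}
    }
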
	I0813 21:06:10.071022  434502 ssh_runner.go:149] Run: openssl version
	I0813 21:06:10.078646  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:06:10.088543  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.093262  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.093314  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.101242  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:06:10.111569  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:06:10.121721  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.128059  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.128117  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.136057  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:06:10.145811  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:06:10.155469  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.161508  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.161558  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.170155  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
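
The openssl/ln pairs above are the standard OpenSSL rehash pattern: hash each CA with `openssl x509 -hash -noout`, then symlink it as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find the trust anchor. A sketch of that step, wrapping the same two commands the log runs (paths are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // rehash installs pem under /etc/ssl/certs by its OpenSSL
    // subject hash, mirroring the openssl/ln pair in the log.
    func rehash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
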
	I0813 21:06:10.181353  434502 kubeadm.go:390] StartCluster: {Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:06:10.181479  434502 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:06:10.181532  434502 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:06:10.226656  434502 cri.go:76] found id: ""
	I0813 21:06:10.226748  434502 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:06:10.238541  434502 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:06:10.238572  434502 kubeadm.go:600] restartCluster start
	I0813 21:06:10.238631  434502 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:06:10.248775  434502 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.250208  434502 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210813210115-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:06:10.250844  434502 kubeconfig.go:128] "embed-certs-20210813210115-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:06:10.251920  434502 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
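
The lock.go line above shows the kubeconfig repair happening under a file lock with Delay:500ms and Timeout:1m0s. A sketch of a retry-until-timeout acquisition consistent with those fields, using an O_EXCL lock file; the real implementation lives in minikube's lock package and the mechanism here is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // withFileLock retries an O_EXCL create every delay until
    // timeout, then runs fn while holding the lock. Mirrors the
    // Delay:500ms Timeout:1m0s fields in the log entry above;
    // minikube's actual locking may differ (assumption).
    func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			defer os.Remove(path) // release on return
    			f.Close()
    			return fn()
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for lock %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	_ = withFileLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute,
    		func() error { fmt.Println("write kubeconfig here"); return nil })
    }
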
	I0813 21:06:10.255366  434502 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:06:10.266016  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.266081  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.280298  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.480725  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.480833  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.494154  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.681384  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.681480  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.693111  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.880851  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.880941  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.893836  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.080959  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.081031  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.091638  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.280934  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.281012  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.291102  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.481367  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.481434  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.490618  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.680878  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.680955  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.690761  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.881063  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.881137  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.890865  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.081085  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.081173  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.091155  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.280366  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.280439  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.290184  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.481389  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.481488  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.492213  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.680455  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.680522  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.690091  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
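
The repeated "Checking apiserver status" entries above are a polling loop: api_server.go runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms (per the timestamps), and exit status 1 simply means no matching process yet. A sketch of that loop; the interval and timeout are read off the log, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until kube-apiserver appears
    // or the timeout elapses, matching the api_server.go:164 loop
    // above. pgrep exits non-zero when there is no match, which
    // Output() surfaces as an error.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return "", fmt.Errorf("timed out waiting for kube-apiserver pid")
    }

    func main() {
    	pid, err := waitForAPIServerPID(time.Minute)
    	fmt.Println(pid, err)
    }
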
	I0813 21:06:11.762406  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:12.263211  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:12.763026  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:13.262897  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:13.762877  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.263085  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.762761  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.262346  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.762737  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.262977  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.001704  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.501470  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.821161  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.821346  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.880681  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.880762  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.893122  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.081147  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.081221  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.090555  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.280837  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.280909  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.291851  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.291871  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.291911  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.300667  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.300686  434502 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:06:13.300694  434502 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:06:13.300757  434502 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:06:13.300806  434502 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:06:13.337181  434502 cri.go:76] found id: ""
	I0813 21:06:13.337237  434502 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:06:13.351877  434502 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:06:13.360342  434502 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:06:13.360395  434502 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:06:13.369326  434502 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:06:13.369345  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:13.612776  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.263028  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.506952  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.637981  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
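
The five commands above replay kubeadm init phase by phase after the stale-config check failed: certs all, kubeconfig all, kubelet-start, control-plane all, etcd local. A sketch of that sequence; the log runs each phase through /bin/bash -c with the versioned PATH prefix, which this approximates with a direct sudo env invocation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitPhases replays the kubeadm phases in the order the
    // log shows. binDir and cfg stand in for
    // /var/lib/minikube/binaries/v1.21.3 and kubeadm.yaml.
    func runInitPhases(binDir, cfg string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := []string{"env", "PATH=" + binDir + ":/usr/bin", "kubeadm", "init", "phase"}
    		args = append(args, p...)
    		args = append(args, "--config", cfg)
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm init phase %v: %v: %s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.21.3", "/var/tmp/minikube/kubeadm.yaml"))
    }
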
	I0813 21:06:14.761221  434502 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:06:14.761297  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.276033  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.775557  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.275444  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.775736  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.275892  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.775800  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.763027  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.262268  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.762930  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.262889  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.762606  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.262504  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.762764  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.262424  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.763080  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.263257  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.502462  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.505177  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.322424  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.817641  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.276249  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.775759  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.276156  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.775649  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.275540  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.776363  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.276085  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.775549  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:22.275554  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:22.775686  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.762982  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.785282  434236 api_server.go:70] duration metric: took 48.539827542s to wait for apiserver process to appear ...
	I0813 21:06:21.785310  434236 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:06:21.785322  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:21.785942  434236 api_server.go:255] stopped: https://192.168.39.163:8444/healthz: Get "https://192.168.39.163:8444/healthz": dial tcp 192.168.39.163:8444: connect: connection refused
	I0813 21:06:22.286681  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.246872  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:26.246922  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:26.287144  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.298988  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:26.299009  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:23.003616  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.004288  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.819140  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.318572  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:26.786868  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.793342  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:26.793366  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:27.286691  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:27.292784  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:27.292812  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
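
In the verbose 500 bodies above, the [+] lines are checks that passed and the [-] lines are the ones still failing (here the rbac and scheduling bootstrap poststarthooks, then just rbac). A tiny parser for pulling the failing check names out of a response shaped like those:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // failedChecks returns the names of the [-] entries in a
    // verbose /healthz body like the 500 responses above.
    func failedChecks(body string) []string {
    	var failed []string
    	for _, line := range strings.Split(body, "\n") {
    		line = strings.TrimSpace(line)
    		if strings.HasPrefix(line, "[-]") {
    			name := strings.TrimPrefix(line, "[-]")
    			if i := strings.Index(name, " failed"); i >= 0 {
    				name = name[:i]
    			}
    			failed = append(failed, name)
    		}
    	}
    	return failed
    }

    func main() {
    	fmt.Println(failedChecks("[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"))
    	// prints: [poststarthook/rbac/bootstrap-roles]
    }
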
	I0813 21:06:27.786271  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:27.792679  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 200:
	ok
	I0813 21:06:27.799994  434236 api_server.go:139] control plane version: v1.21.3
	I0813 21:06:27.800012  434236 api_server.go:129] duration metric: took 6.014696522s to wait for apiserver health ...
	I0813 21:06:27.800022  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:06:27.800062  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:23.276286  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:23.776368  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:24.276462  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:24.775975  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:25.275957  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:25.775561  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:26.276428  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:26.775547  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.275581  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.775541  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.802064  434236 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:27.802121  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:27.809702  434236 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
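
Having recommended bridge for the kvm2 + containerd combination, the run writes a generated conflist to /etc/cni/net.d/1-k8s.conflist. The actual 457-byte payload is not shown in the log, so the config below is only a generic bridge-plugin conflist of the kind that file would contain, written the same way (requires root):

    package main

    import (
    	"fmt"
    	"os"
    )

    // Illustrative bridge CNI conflist; minikube's generated
    // 1-k8s.conflist is not shown in the log and will differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {"type": "bridge", "bridge": "bridge0", "isGateway": true,
         "ipMasq": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }
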
	I0813 21:06:27.826306  434236 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:27.841765  434236 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:27.841793  434236 system_pods.go:61] "coredns-558bd4d5db-pgvfh" [1a07397c-0aca-43f9-a2b7-36a6b02771a5] Running
	I0813 21:06:27.841801  434236 system_pods.go:61] "etcd-default-k8s-different-port-20210813210121-393438" [7113d745-dc4c-4fce-afc4-d66d374933cb] Running
	I0813 21:06:27.841810  434236 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210121-393438" [f83aa51e-8e79-4280-8897-8762c33cfc4c] Running
	I0813 21:06:27.841816  434236 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210121-393438" [4a639055-ca6a-4a71-b697-f77eb4ede3a1] Running
	I0813 21:06:27.841822  434236 system_pods.go:61] "kube-proxy-59w6c" [61f4a377-504a-4826-a20b-3afdcb247fd6] Running
	I0813 21:06:27.841827  434236 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210121-393438" [6734eb94-94d5-4b97-9f3e-4090a1456e78] Running
	I0813 21:06:27.841838  434236 system_pods.go:61] "metrics-server-7c784ccb57-x428n" [67da7c22-bd45-4039-82bb-40a3de84b60f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:27.841848  434236 system_pods.go:61] "storage-provisioner" [f0d06d4f-d8e4-4be1-a716-153b7e89f6e4] Running
	I0813 21:06:27.841856  434236 system_pods.go:74] duration metric: took 15.534941ms to wait for pod list to return data ...
	I0813 21:06:27.841865  434236 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:27.846952  434236 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:27.846983  434236 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:27.847036  434236 node_conditions.go:105] duration metric: took 5.164912ms to run NodePressure ...
	I0813 21:06:27.847053  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:28.216153  434236 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:28.221385  434236 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0813 21:06:28.586960  434236 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0813 21:06:29.030426  434236 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 21:06:29.563354  434236 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 21:06:30.351601  434236 retry.go:31] will retry after 1.502072952s: kubelet not initialised
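
The retry.go intervals above (360ms, 436ms, 527ms, 780ms, 1.5s, continuing below up to 18.9s) roughly double with jitter while waiting for the restarted kubelet. A sketch of a capped, jittered exponential backoff with that shape; the exact jitter policy and cap are assumptions, not minikube's retry package:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpBackoff retries fn with roughly doubling, jittered
    // delays, matching the shape of the retry.go intervals above.
    func retryExpBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
    		time.Sleep(delay + jitter)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	return err
    }

    func main() {
    	err := retryExpBackoff(10, 360*time.Millisecond, 20*time.Second, func() error {
    		return fmt.Errorf("kubelet not initialised")
    	})
    	fmt.Println(err)
    }
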
	I0813 21:06:27.503099  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.001034  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.001820  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.818355  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.818740  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.818830  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:28.276273  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:28.776262  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:29.276050  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:29.776362  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:30.275521  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:30.776092  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.276357  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.776415  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:32.275529  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:32.776092  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.861131  434236 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 21:06:32.940988  434236 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 21:06:34.820241  434236 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 21:06:34.505062  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.505849  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.819176  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:35.820987  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.276254  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:33.776368  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:34.276134  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:34.776202  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:35.275686  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:35.775829  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:36.276559  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:36.776220  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.275740  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.775808  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.376657  434236 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 21:06:39.006116  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.006424  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:37.829656  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.319134  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.275515  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:38.776398  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:39.276039  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:39.775524  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:40.275880  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:40.775535  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:41.275713  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:41.776051  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.276097  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.776117  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.515553  434236 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 21:06:42.519498  434036 pod_ready.go:92] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.519524  434036 pod_ready.go:81] duration metric: took 32.570739003s waiting for pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.519538  434036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.534376  434036 pod_ready.go:92] pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.534400  434036 pod_ready.go:81] duration metric: took 14.853692ms waiting for pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.534413  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.542051  434036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.542072  434036 pod_ready.go:81] duration metric: took 7.649854ms waiting for pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.542085  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.551759  434036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.551778  434036 pod_ready.go:81] duration metric: took 9.684667ms waiting for pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.551790  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zrnsp" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.558134  434036 pod_ready.go:92] pod "kube-proxy-zrnsp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.558151  434036 pod_ready.go:81] duration metric: took 6.353431ms waiting for pod "kube-proxy-zrnsp" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.558161  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.898183  434036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.898209  434036 pod_ready.go:81] duration metric: took 340.039042ms waiting for pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.898220  434036 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:45.308578  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.321737  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.816826  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.276237  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:43.775821  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:44.276105  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:44.775884  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:45.275614  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:45.775627  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:46.276396  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:46.776197  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.275874  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.776502  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.807536  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:50.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.318944  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.821979  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:50.318268  434426 pod_ready.go:92] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.318297  434426 pod_ready.go:81] duration metric: took 41.114530272s waiting for pod "coredns-78fcd69978-f47dd" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.318311  434426 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.325569  434426 pod_ready.go:92] pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.325586  434426 pod_ready.go:81] duration metric: took 7.26781ms waiting for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.325598  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.334259  434426 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.334302  434426 pod_ready.go:81] duration metric: took 8.696424ms waiting for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.334315  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.344213  434426 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.344233  434426 pod_ready.go:81] duration metric: took 9.907594ms waiting for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.344246  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jl8gn" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.365993  434426 pod_ready.go:92] pod "kube-proxy-jl8gn" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.366014  434426 pod_ready.go:81] duration metric: took 21.760778ms waiting for pod "kube-proxy-jl8gn" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.366026  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.713844  434426 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.713867  434426 pod_ready.go:81] duration metric: took 347.831128ms waiting for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.713880  434426 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" ...
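
The pod_ready.go:78/92 pairs above each wait up to 4m0s for one control-plane pod's Ready condition to turn True and then report the elapsed duration. A client-go sketch of that wait, assuming a kubeconfig in the default location; minikube's own implementation differs in detail:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is
    // True, the same shape as the pod_ready waits in the log.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // keep polling through transient errors
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-jl8gn", 4*time.Minute))
    }
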
	I0813 21:06:48.275626  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:48.775745  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:49.275564  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:49.775902  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:50.275938  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:50.775797  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:51.275647  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:51.776313  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:52.275795  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:52.775934  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:52.282208  434236 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0813 21:06:52.805669  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.807200  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:56.807643  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.151574  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:55.621726  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.275576  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:53.775720  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:54.276020  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:54.775811  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:55.275566  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:55.775883  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:56.276445  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:56.775931  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:57.276228  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:57.775511  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:58.807964  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.309268  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.622203  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.624586  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.126220  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.275484  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:58.776166  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:59.275550  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:59.775542  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:00.275745  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:00.775927  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:01.276255  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:01.776408  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:02.275616  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:02.775862  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:03.807156  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.130466  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.625720  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.276472  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:03.775573  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:04.275590  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:04.776446  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:05.275752  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:05.775925  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:06.276144  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:06.775472  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:07.275532  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:07.775754  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:11.229359  434236 kubeadm.go:746] kubelet initialised
	I0813 21:07:11.229389  434236 kubeadm.go:747] duration metric: took 43.013205159s waiting for restarted kubelet to initialise ...
	I0813 21:07:11.229401  434236 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:11.238577  434236 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.251014  434236 pod_ready.go:92] pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:11.251035  434236 pod_ready.go:81] duration metric: took 12.426366ms waiting for pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.251047  434236 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.255674  434236 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:11.255691  434236 pod_ready.go:81] duration metric: took 4.635703ms waiting for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.255706  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:08.806115  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.807534  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.120975  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.123527  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.276185  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:08.776211  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.276442  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.775975  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.790545  434502 api_server.go:70] duration metric: took 55.029322515s to wait for apiserver process to appear ...
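The pgrep loop above is the harness polling, roughly every 500ms, until a kube-apiserver process matching the pattern appears on the node. A minimal sketch of the same idea in Go, assuming pgrep is on PATH and running locally rather than through the SSH session the real run uses:

// waitForProcess re-runs `pgrep -xnf PATTERN` until it exits 0 (at least
// one process matches) or the deadline passes. Interval and timeout here
// are illustrative assumptions, not the harness's actual values.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}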
	I0813 21:07:09.790626  434502 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:07:09.790646  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:09.792152  434502 api_server.go:255] stopped: https://192.168.72.95:8443/healthz: Get "https://192.168.72.95:8443/healthz": dial tcp 192.168.72.95:8443: connect: connection refused
	I0813 21:07:10.293011  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:14.475767  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:07:14.475812  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:07:14.793218  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:14.798556  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:07:14.798580  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:07:15.292846  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:15.300973  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:07:15.300998  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:07:15.792511  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:15.804455  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0813 21:07:15.814303  434502 api_server.go:139] control plane version: v1.21.3
	I0813 21:07:15.814327  434502 api_server.go:129] duration metric: took 6.02368902s to wait for apiserver health ...
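The healthz sequence above shows the expected progression while the control plane comes up: connection refused, then 403 (the anonymous probe is rejected before RBAC bootstrap roles exist), then 500 (poststarthooks still failing), then 200 "ok". A minimal sketch of such a poll in Go, assuming a probe that does not yet trust the cluster CA (hence the insecure TLS config) and treating any non-200 response as "retry":

// waitForHealthz polls the apiserver's /healthz endpoint until it
// returns HTTP 200. The ~500ms cadence mirrors the log above; the
// endpoint URL in main is the one from this run.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.95:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}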
	I0813 21:07:15.814340  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.814348  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:07:13.272296  434236 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.269360  434236 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:14.269392  434236 pod_ready.go:81] duration metric: took 3.01367661s waiting for pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:14.269407  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:16.284244  434236 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.309553  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.808412  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.124394  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.136794  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.815910  434502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.815980  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.824375  434502 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
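The scp step above drops a bridge CNI config into /etc/cni/net.d, matching the earlier "recommending bridge" decision for the kvm2 driver with the containerd runtime. A sketch of writing such a file in Go; the conflist below is an illustrative bridge + portmap chain in the documented CNI conflist format, not the exact 457-byte file this run copied, and the subnet is an assumption:

// cniconf writes an illustrative bridge CNI conflist. Filename matches
// the log above; the JSON body is a plausible example, not minikube's.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}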
	I0813 21:07:15.839198  434502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:07:15.859757  434502 system_pods.go:59] 8 kube-system pods found
	I0813 21:07:15.859784  434502 system_pods.go:61] "coredns-558bd4d5db-pt8qp" [4b80cbcb-3806-4176-9407-d1052b959548] Running
	I0813 21:07:15.859789  434502 system_pods.go:61] "etcd-embed-certs-20210813210115-393438" [73d1aa71-d312-44ce-aa0e-c6d79153b7c5] Running
	I0813 21:07:15.859794  434502 system_pods.go:61] "kube-apiserver-embed-certs-20210813210115-393438" [28e2fb79-3ee9-4880-a26e-231c3f384c1c] Running
	I0813 21:07:15.859798  434502 system_pods.go:61] "kube-controller-manager-embed-certs-20210813210115-393438" [8cd94f0c-7cc5-43af-91e1-86960d354db9] Running
	I0813 21:07:15.859805  434502 system_pods.go:61] "kube-proxy-kjphp" [38a3daef-9d16-4d30-a285-859858eb75fb] Running
	I0813 21:07:15.859811  434502 system_pods.go:61] "kube-scheduler-embed-certs-20210813210115-393438" [b7048830-cba7-4f74-9143-0df360f72f9d] Running
	I0813 21:07:15.859823  434502 system_pods.go:61] "metrics-server-7c784ccb57-8nk4r" [866b08d2-ac27-4bea-a139-ef4bd73f01c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:07:15.859841  434502 system_pods.go:61] "storage-provisioner" [7bf768d3-0513-4a9d-a42f-632676795045] Running
	I0813 21:07:15.859848  434502 system_pods.go:74] duration metric: took 20.635509ms to wait for pod list to return data ...
	I0813 21:07:15.859859  434502 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:07:15.863654  434502 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:07:15.863685  434502 node_conditions.go:123] node cpu capacity is 2
	I0813 21:07:15.863703  434502 node_conditions.go:105] duration metric: took 3.834933ms to run NodePressure ...
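The NodePressure verification above reads the node's reported capacity (17784752Ki of ephemeral storage, 2 CPUs in this run). A minimal client-go sketch of the same read, assuming a kubeconfig at the default location:

// nodecap lists nodes and prints the CPU and ephemeral-storage capacity
// from each node's status, as the node_conditions check above does.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}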
	I0813 21:07:15.863721  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:07:16.193321  434502 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:07:16.199509  434502 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0813 21:07:16.566486  434502 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0813 21:07:17.014911  434502 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 21:07:17.548331  434502 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 21:07:18.284613  434236 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.784551  434236 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.784587  434236 pod_ready.go:81] duration metric: took 4.515169958s waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.784605  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-59w6c" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.789934  434236 pod_ready.go:92] pod "kube-proxy-59w6c" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.789957  434236 pod_ready.go:81] duration metric: took 5.344262ms waiting for pod "kube-proxy-59w6c" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.789969  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.795694  434236 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.795714  434236 pod_ready.go:81] duration metric: took 5.73581ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.795724  434236 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:20.811739  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.306423  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.311424  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.625434  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.132243  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.336752  434502 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0813 21:07:19.861088  434502 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 21:07:20.942258  434502 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 21:07:22.818506  434502 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 21:07:23.311525  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.314360  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.806784  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.308634  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.624709  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.625595  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:27.120921  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.380548  434502 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 21:07:27.318633  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.812304  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:27.808395  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.808591  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.126100  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.621750  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.518560  434502 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 21:07:31.812364  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.813572  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.311298  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:32.307070  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:34.319179  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.808675  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.622811  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.126056  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.311412  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.312272  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.809354  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:41.307236  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.126184  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.128899  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.283114  434502 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0813 21:07:42.810757  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.310911  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.808043  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.305820  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.621376  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.122604  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.124604  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.312196  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.315617  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.307988  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.309212  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.621521  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.624309  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.810709  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.823425  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.311568  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.807052  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.305029  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.122426  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.125762  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.812506  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.313636  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.308380  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.809491  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.130929  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.133009  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.231806  434502 kubeadm.go:746] kubelet initialised
	I0813 21:07:59.231828  434502 kubeadm.go:747] duration metric: took 43.038475839s waiting for restarted kubelet to initialise ...
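The retry.go lines that led up to this point (360ms, 436ms, 527ms, 780ms, ... 18.9s) show each "kubelet not initialised" attempt waiting roughly twice as long as the last, with jitter. A minimal sketch of that backoff pattern in Go; the multiplier, jitter range, and cap are assumptions chosen to resemble the observed delays:

// retryExpo retries check() with jittered exponential backoff until it
// succeeds or the base delay exceeds maxWait.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryExpo(check func() error, initial, maxWait time.Duration) error {
	wait := initial
	for {
		err := check()
		if err == nil {
			return nil
		}
		if wait > maxWait {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter the delay so parallel waiters do not synchronise.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 400*time.Millisecond, 30*time.Second)
}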
	I0813 21:07:59.231845  434502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:59.239711  434502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:01.276611  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.812222  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.310423  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.308286  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.308778  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.805247  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.621598  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.622355  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.622525  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.277406  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.783084  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.310901  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.312505  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.805886  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.807381  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.623877  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:11.122714  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.277357  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.281005  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.774931  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.809835  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:14.814406  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.307768  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.309217  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.123158  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.620492  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:14.775335  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.275884  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.311309  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.311627  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.806855  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.807892  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.808757  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.638499  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.126993  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.128857  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.276049  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.280006  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.810244  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.810572  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.812501  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.308611  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:26.805692  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.627607  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:27.122261  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.775905  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:26.276141  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.310324  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.312054  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.806282  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.307435  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.129014  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.622549  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.284915  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.774964  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:32.775965  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:32.809772  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.810265  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.809848  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.305688  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.130011  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.622943  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.781489  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.275186  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.811560  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.813295  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.815292  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.308742  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.805956  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.623795  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.123452  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.275332  434502 pod_ready.go:92] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.275365  434502 pod_ready.go:81] duration metric: took 39.035626612s waiting for pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.275379  434502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.281005  434502 pod_ready.go:92] pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.281022  434502 pod_ready.go:81] duration metric: took 5.63355ms waiting for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.281034  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.287755  434502 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.287768  434502 pod_ready.go:81] duration metric: took 6.727485ms waiting for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.287777  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.295981  434502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.295996  434502 pod_ready.go:81] duration metric: took 8.211264ms waiting for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.296006  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kjphp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.302322  434502 pod_ready.go:92] pod "kube-proxy-kjphp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.302358  434502 pod_ready.go:81] duration metric: took 6.3457ms waiting for pod "kube-proxy-kjphp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.302369  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.674125  434502 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.674147  434502 pod_ready.go:81] duration metric: took 371.76822ms waiting for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.674161  434502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" ...
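The pod_ready checks running throughout this section report "Ready":"True" or "False" from the pod's PodReady status condition; the metrics-server pods stay "False" for minutes at a time while the control-plane pods flip to "True" quickly. A minimal client-go sketch of one such poll, assuming a kubeconfig at the default location and an illustrative ~2s cadence:

// podready polls a pod in kube-system until its PodReady condition is
// True, as the pod_ready.go waits above do (they allow up to 4m0s).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-7c784ccb57-8nk4r", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}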
	I0813 21:08:41.079821  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.310251  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.311095  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:42.808973  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:44.809136  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.623697  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:46.127208  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.080389  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.583078  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.586577  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.811216  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:49.811374  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.306709  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:49.310656  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.807825  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:48.133154  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.136763  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.079908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.083234  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.811653  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.315282  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:53.812570  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.307726  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.622320  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.624020  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:57.125236  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.580777  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.581624  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.812257  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.311620  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.312173  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:58.310307  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:00.805499  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.126399  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.622100  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.080603  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.081108  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.817167  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:06.309418  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:02.807319  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.309772  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:04.130586  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:06.621919  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.087920  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.579200  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:07.579537  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.317286  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.811532  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:07.807745  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.306939  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.622411  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.626074  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.082379  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:12.086276  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:13.311319  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.311868  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:12.308511  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.806605  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:16.807320  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:13.122716  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.622820  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.583833  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.085575  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.811201  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:20.312158  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.307729  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.308201  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.623511  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.630537  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:22.126818  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.580074  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.581281  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:22.313523  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:24.809941  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:23.311738  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:25.808963  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:24.128731  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.620247  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:23.581660  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.081792  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.811628  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:29.311318  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.307368  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:30.808816  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.622166  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:30.622845  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.584687  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:31.080501  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:31.812694  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:34.313072  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.307792  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:35.308295  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.121915  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:35.122855  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:37.123919  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.582420  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:36.081160  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:36.812368  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.313851  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:37.308725  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.309348  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.809891  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.124768  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.125582  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:38.082318  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:40.580574  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:42.589121  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.810962  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:44.310310  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:46.311501  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:44.306795  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:46.309861  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:43.128388  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:45.623794  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:45.079577  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:47.081075  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.315701  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:50.811121  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.811244  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:51.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.127029  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:50.132607  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:49.084131  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:51.085632  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.313563  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:55.812263  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.806483  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:56.308595  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:52.621447  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:54.622461  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:56.622576  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.581563  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:55.584272  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.311305  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:00.312205  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.808808  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:01.310736  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:59.122709  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:01.183950  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.082114  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:00.580651  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:02.582962  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:02.812266  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:04.813638  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:03.810231  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:06.307885  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:03.628234  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:06.128265  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:04.583232  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:07.087258  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:07.313525  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:09.812384  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:08.308710  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:10.807681  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:08.623417  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.120889  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:09.087903  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.579908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.815424  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:14.311027  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:12.808163  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.307806  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:13.124839  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.622827  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:13.584071  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.644677  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:16.812994  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.813605  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.311156  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.808726  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:19.808863  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.128710  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:20.622257  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.080644  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:20.086210  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.579739  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:23.312656  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:25.811861  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.309509  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.810625  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.626308  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:25.131799  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.581224  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.081507  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:28.311799  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:30.818179  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.306016  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.307701  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:31.807555  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.623183  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.624183  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:32.131508  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.083248  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:31.586828  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:33.310454  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:35.311492  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:34.307390  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.308357  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:34.622202  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.622387  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:33.591286  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.081362  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:37.809092  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:39.810884  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:38.807809  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:40.808110  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:39.128048  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:41.623162  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:38.083908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:40.584247  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:41.811080  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.815387  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:46.310171  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.299572  434036 pod_ready.go:81] duration metric: took 4m0.401335464s waiting for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:43.299598  434036 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:10:43.299620  434036 pod_ready.go:38] duration metric: took 4m33.366575794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:43.299662  434036 kubeadm.go:604] restartCluster took 5m40.655534371s
	W0813 21:10:43.299904  434036 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:10:43.300024  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:10:46.304680  434036 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.004628264s)
	I0813 21:10:46.304745  434036 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:46.318447  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:46.318523  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:46.352096  434036 cri.go:76] found id: "f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7"
	I0813 21:10:46.352124  434036 cri.go:76] found id: "b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010"
	I0813 21:10:46.352131  434036 cri.go:76] found id: ""
	W0813 21:10:46.352140  434036 kubeadm.go:840] found 2 kube-system containers to stop
	I0813 21:10:46.352190  434036 cri.go:221] Stopping containers: [f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7 b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010]
	I0813 21:10:46.352250  434036 ssh_runner.go:149] Run: which crictl
	I0813 21:10:46.356519  434036 ssh_runner.go:149] Run: sudo /bin/crictl stop f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7 b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010
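Before `kubeadm init` can recreate the control plane, minikube stops the kube-system containers that survived the reset. The two Run lines above condense to the following bash sketch (the `<container-id>` placeholder stands in for the hashes listed by the `found id:` lines; everything else is taken verbatim from the log):

    # list every kube-system container containerd still tracks
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # stop each returned ID so the fresh control plane starts from a clean slate
    sudo /bin/crictl stop <container-id> [<container-id> ...]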
	I0813 21:10:46.389495  434036 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:46.397140  434036 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:46.406441  434036 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:46.406489  434036 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0813 21:10:46.994208  434036 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:44.137716  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:46.623167  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.080064  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:45.080782  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:47.085711  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:48.311498  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:50.322271  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:48.095926  434036 out.go:204]   - Booting up control plane ...
	I0813 21:10:48.623391  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:50.624889  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:51.113278  434426 pod_ready.go:81] duration metric: took 4m0.399380697s waiting for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:51.113311  434426 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:10:51.113332  434426 pod_ready.go:38] duration metric: took 4m41.936938903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:51.113362  434426 kubeadm.go:604] restartCluster took 5m11.230222626s
	W0813 21:10:51.113488  434426 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:10:51.113526  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:10:49.584163  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:52.081779  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.882001  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.768438465s)
	I0813 21:10:54.882088  434426 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:54.898511  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:54.898580  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:54.936468  434426 cri.go:76] found id: ""
	I0813 21:10:54.936558  434426 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:54.945597  434426 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:54.953577  434426 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:54.953617  434426 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:55.502804  434426 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:52.813241  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.814328  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.527151  434426 out.go:204]   - Booting up control plane ...
	I0813 21:10:54.088351  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.582557  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:00.186147  434036 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:00.630583  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:11:00.630631  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:10:57.310913  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:59.811517  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:00.632424  434036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:00.632506  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:00.641928  434036 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
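The 457-byte conflist copied here is not reproduced in the log. As an illustrative sketch only (the plugin fields and subnet below are assumptions, not values from this run), a bridge CNI config of that size typically has this shape:

    # illustrative shape only -- the real conflist contents are not in this log
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF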
	I0813 21:11:00.656175  434036 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:00.656236  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:00.656255  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205952-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:01.059199  434036 ops.go:34] apiserver oom_adj: 16
	I0813 21:11:01.059224  434036 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:11:01.059191  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:01.059238  434036 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
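The oom_adj lines above shield the apiserver from the kernel OOM killer: the score read back was 16, so minikube rewrites it to -10. Condensed into the two commands the log actually ran:

    # read the current score (16 in this run) ...
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # ... then lower it so the OOM killer prefers other processes over kube-apiserver
    echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj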
	I0813 21:11:01.674772  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:58.584725  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:01.081620  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:02.311564  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:04.313948  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:02.174386  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:02.674774  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.174370  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.675153  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:04.174246  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:04.674985  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:05.174503  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:05.675000  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:06.174971  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:06.674929  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.582192  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:06.081634  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:06.814844  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:09.310738  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:11.313358  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:07.174993  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:07.675061  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.174860  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.674596  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:09.175181  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:09.674238  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:10.174232  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:10.674797  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:11.174169  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:11.675156  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.580977  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:10.584133  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:12.584516  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:13.172343  434426 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:13.720996  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:11:13.721025  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:11:12.174219  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:12.674355  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.174223  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.675187  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.175037  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.674721  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.174792  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.674194  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.922458  434036 kubeadm.go:985] duration metric: took 15.266270548s to wait for elevateKubeSystemPrivileges.
	I0813 21:11:15.922496  434036 kubeadm.go:392] StartCluster complete in 6m13.317137577s
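The long run of identical `kubectl get sa default` calls above is minikube polling for the default service account, which must exist before the `minikube-rbac` cluster-admin binding can take effect. A rough bash equivalent (the 0.5s interval is inferred from the spacing of the log timestamps, not stated in the log):

    # poll until the default service account exists, then proceed
    until sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done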
	I0813 21:11:15.922521  434036 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:15.922651  434036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:15.924691  434036 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:16.472904  434036 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205952-393438" rescaled to 1
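The rescale recorded here happens through the API from Go, but it is equivalent to running (a sketch, assuming kubectl access to the same context):

    kubectl --context old-k8s-version-20210813205952-393438 \
      -n kube-system scale deployment coredns --replicas=1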
	I0813 21:11:16.473031  434036 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.180 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:11:16.473054  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:16.474575  434036 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:16.473151  434036 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:16.474654  434036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:16.474725  434036 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.474749  434036 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.474757  434036 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:16.474757  434036 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.474778  434036 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.474787  434036 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:16.474791  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.474823  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.473340  434036 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:11:16.475008  434036 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.475041  434036 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.475053  434036 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:16.475079  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.475401  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.475459  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.475505  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.475545  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.475769  434036 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.475844  434036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.476015  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.476055  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.476272  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.476315  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.497442  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0813 21:11:16.497642  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0813 21:11:16.497988  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498021  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0813 21:11:16.498283  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498412  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498559  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0813 21:11:16.498598  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498613  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498761  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498779  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498860  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498920  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498942  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.498995  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.499297  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499341  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499431  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.499455  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.499592  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.499635  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.499788  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499949  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.499981  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.500022  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.500027  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.500054  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.514821  434036 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.514848  434036 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:16.514875  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.515287  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.515325  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.515556  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0813 21:11:16.515581  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42801
	I0813 21:11:16.515557  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43087
	I0813 21:11:16.516056  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516149  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516217  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516575  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.516594  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.516709  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.516728  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.517055  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.517058  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.517332  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.517382  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.517392  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.517409  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.517811  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.518005  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.523868  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.524072  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.525915  434036 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:16.527474  434036 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:11:16.524496  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.527547  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:11:16.527559  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:11:16.527584  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.529027  434036 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:16.529080  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:16.529091  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:16.529110  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:13.813030  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:16.311896  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:16.530900  434036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:16.529862  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0813 21:11:16.531012  434036 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:16.531026  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:16.531044  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.531462  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.532183  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.532200  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.532644  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.533900  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.533944  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.537401  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.537962  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538641  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.538693  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538805  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.538842  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538873  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.539033  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.539211  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.539315  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.539359  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.539691  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.539853  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.539998  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.540454  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.540864  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.540899  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.541064  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.541224  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.541450  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.541603  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.547557  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37251
	I0813 21:11:16.547929  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.548399  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.548424  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.548756  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.548937  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.551474  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.551673  434036 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:16.551689  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:11:16.551707  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.556960  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.557325  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.557358  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.557473  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.557621  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.557777  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.557905  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.891070  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:11:16.891095  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:11:16.921005  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:11:16.921029  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:11:16.958755  434036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205952-393438" to be "Ready" ...
	I0813 21:11:16.958833  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:11:16.961190  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:11:16.961221  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:11:16.966347  434036 node_ready.go:49] node "old-k8s-version-20210813205952-393438" has status "Ready":"True"
	I0813 21:11:16.966365  434036 node_ready.go:38] duration metric: took 7.580764ms waiting for node "old-k8s-version-20210813205952-393438" to be "Ready" ...
	I0813 21:11:16.966379  434036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:16.972401  434036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:16.995238  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:17.006124  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:17.025333  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:11:17.025360  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:11:17.027216  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:11:17.027236  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:11:13.722715  434426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:13.722800  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:13.734030  434426 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:11:13.750877  434426 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:13.750976  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.750976  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813210044-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.806850  434426 ops.go:34] apiserver oom_adj: -16
	I0813 21:11:14.073328  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.667824  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.168149  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.667995  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:16.167441  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:16.667309  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.082786  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.586523  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.111367  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:17.111398  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:11:17.118314  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:11:17.118335  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:11:17.149133  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:17.162658  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:11:17.162693  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:11:17.234536  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:11:17.234569  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:11:17.295123  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:11:17.295156  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:11:17.482125  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:11:17.482162  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:11:17.677419  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:17.677447  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:11:17.757023  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:17.960379  434036 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.001504607s)
	I0813 21:11:17.960480  434036 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
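The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves inside the cluster. Reconstructed from the sed expression in the Run line, the fragment injected ahead of the `forward` directive is:

    hosts {
       192.168.83.1 host.minikube.internal
       fallthrough
    }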
	I0813 21:11:18.539601  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.544332297s)
	I0813 21:11:18.539651  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.539654  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.533491345s)
	I0813 21:11:18.539692  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.539666  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.539715  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.539988  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540005  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540013  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540022  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540110  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540198  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540213  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540222  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540238  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540266  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540277  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540278  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540299  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540242  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540531  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540593  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540647  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540664  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540692  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540703  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.891875  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742689541s)
	I0813 21:11:18.891930  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.891957  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.892355  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.892377  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.892389  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.892399  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.892625  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.892635  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.892645  434036 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205952-393438"
	I0813 21:11:19.031254  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:19.828027  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.070944239s)
	I0813 21:11:19.828086  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:19.828101  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:19.828430  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:19.828452  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:19.828452  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:19.828488  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:19.828501  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:19.828750  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:19.828768  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.311970  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:18.803484  434236 pod_ready.go:81] duration metric: took 4m0.007742732s waiting for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" ...
	E0813 21:11:18.803527  434236 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:11:18.803553  434236 pod_ready.go:38] duration metric: took 4m7.574137981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:18.803589  434236 kubeadm.go:604] restartCluster took 5m50.491873522s
	W0813 21:11:18.803752  434236 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:11:18.803790  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:11:19.830658  434036 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:11:19.830710  434036 addons.go:344] enableAddons completed in 3.357568207s
	I0813 21:11:21.074595  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.167975  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:17.668129  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:18.167259  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:18.667894  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.167326  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.667247  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:20.167336  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:20.667616  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:21.167732  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:21.667655  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.587857  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.083079  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.321035  434236 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.517214272s)
	I0813 21:11:22.321114  434236 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:11:22.336500  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:11:22.336600  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:11:22.381833  434236 cri.go:76] found id: ""
	I0813 21:11:22.381921  434236 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:11:22.390007  434236 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:11:22.402520  434236 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:11:22.402571  434236 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:11:22.966621  434236 out.go:204]   - Generating certificates and keys ...
	I0813 21:11:23.985069  434236 out.go:204]   - Booting up control plane ...
	I0813 21:11:23.486070  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:25.488157  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.167466  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:22.668165  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:23.167304  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:23.667348  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:24.167472  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:24.667538  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:25.167599  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:25.667365  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:26.167194  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:26.667310  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:27.167160  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:27.282634  434426 kubeadm.go:985] duration metric: took 13.531711919s to wait for elevateKubeSystemPrivileges.
	I0813 21:11:27.282691  434426 kubeadm.go:392] StartCluster complete in 5m47.489271406s
	I0813 21:11:27.282716  434426 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:27.282848  434426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:27.284813  434426 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:27.814838  434426 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813210044-393438" rescaled to 1
	I0813 21:11:27.814916  434426 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:11:27.814960  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:24.580604  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:26.581927  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:27.816918  434426 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:27.816991  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:27.815020  434426 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:27.817075  434426 addons.go:59] Setting dashboard=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817089  434426 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817094  434426 addons.go:135] Setting addon dashboard=true in "no-preload-20210813210044-393438"
	I0813 21:11:27.815219  434426 config.go:177] Loaded profile config "no-preload-20210813210044-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:27.817105  434426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813210044-393438"
	I0813 21:11:27.817111  434426 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817138  434426 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813210044-393438"
	I0813 21:11:27.817076  434426 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813210044-393438"
	W0813 21:11:27.817150  434426 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:27.817165  434426 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813210044-393438"
	W0813 21:11:27.817177  434426 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:27.817202  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.817218  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	W0813 21:11:27.817102  434426 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:27.817286  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.817568  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817609  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817638  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817667  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817735  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817770  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817785  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817803  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.829240  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0813 21:11:27.829663  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.830228  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.830249  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.830834  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.831042  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.833857  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0813 21:11:27.834306  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.834848  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.834868  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.835441  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.835990  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.836027  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.836766  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0813 21:11:27.837138  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.837429  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0813 21:11:27.837624  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.837643  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.837784  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.837987  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.838257  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.838272  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.838627  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.838727  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.838777  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.839268  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.839313  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.854776  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0813 21:11:27.854789  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0813 21:11:27.854873  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0813 21:11:27.855191  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.855405  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.855681  434426 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813210044-393438"
	W0813 21:11:27.855703  434426 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:27.855732  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.855782  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.855807  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.855876  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.855892  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.856153  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.856175  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.856191  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.856214  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.856359  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.856382  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.857051  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.857489  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.857514  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.857874  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.858051  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.861869  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.862110  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.863672  434426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:27.865163  434426 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:11:27.865222  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:11:27.863060  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.865237  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:11:27.865259  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.863777  434426 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:27.865299  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:27.865322  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.867484  434426 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:27.987850  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:30.491230  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:27.869002  434426 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:27.869069  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:27.869084  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:27.869103  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.869712  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0813 21:11:27.870137  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.870637  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.870661  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.871139  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.871751  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.871808  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.872140  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.873481  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.873515  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.873685  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.873901  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.874088  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.874251  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.874538  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.875107  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.875132  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.875322  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.875465  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.875608  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.875702  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.877188  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.877589  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.877620  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.877775  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.877959  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.878119  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.878270  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.883439  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0813 21:11:27.883840  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.884297  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.884322  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.884659  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.884850  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.887864  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.888070  434426 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:27.888087  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:11:27.888105  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.893121  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.893492  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.893516  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.893656  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.893808  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.893979  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.894145  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:28.140814  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:11:28.140836  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:11:28.308462  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:11:28.308490  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:11:28.312755  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:28.316440  434426 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813210044-393438" to be "Ready" ...
	I0813 21:11:28.316649  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:11:28.320165  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:11:28.320188  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:11:28.323008  434426 node_ready.go:49] node "no-preload-20210813210044-393438" has status "Ready":"True"
	I0813 21:11:28.323025  434426 node_ready.go:38] duration metric: took 6.554015ms waiting for node "no-preload-20210813210044-393438" to be "Ready" ...
	I0813 21:11:28.323037  434426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:28.335561  434426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:28.384581  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:28.455637  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:28.455676  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:11:28.465004  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:11:28.465031  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:11:28.656110  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:11:28.656140  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:11:28.701795  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:28.916338  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:11:28.916368  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:11:29.075737  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:11:29.075769  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:11:29.146984  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:11:29.147013  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:11:29.358157  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:11:29.358203  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:11:29.917909  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:11:29.917935  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:11:29.986046  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:29.986075  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:11:30.122849  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:30.364045  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:30.578023  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.261329708s)
	I0813 21:11:30.578056  434426 start.go:728] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS
	I0813 21:11:30.578249  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.265452722s)
	I0813 21:11:30.578304  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.578324  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.578634  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.578652  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.578680  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.578694  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.580051  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.580102  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.580128  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.580153  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.580167  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.580440  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.580459  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.746411  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.361788474s)
	I0813 21:11:30.746464  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.746478  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.746827  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.746890  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.746915  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.746939  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.746955  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.747238  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.747278  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.747288  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.414876  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.713031132s)
	I0813 21:11:31.414948  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:31.414972  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:31.415309  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:31.415443  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:31.415475  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.415495  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:31.415516  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:31.416949  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:31.416967  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:31.416984  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.416997  434426 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813210044-393438"
	I0813 21:11:32.305740  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.182835966s)
	I0813 21:11:32.305798  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:32.305817  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:32.306117  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:32.306138  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:32.306150  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:32.306161  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:32.307516  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:32.307583  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:29.082122  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:31.084872  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:32.988501  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:35.489902  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:32.309263  434426 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:11:32.309287  434426 addons.go:344] enableAddons completed in 4.494276897s
	I0813 21:11:32.858825  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:34.860075  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:36.860345  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:33.584913  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:35.593941  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.171000  434236 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:40.713714  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:11:40.713746  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:11:40.715369  434236 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:40.715459  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:40.728777  434236 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:11:40.756822  434236 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:40.756935  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210121-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_40_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:40.756935  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:41.175747  434236 ops.go:34] apiserver oom_adj: -16
	I0813 21:11:41.176252  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:37.986288  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:39.987469  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:38.865021  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.854753  434426 pod_ready.go:97] error getting pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-2kv7b" not found
	I0813 21:11:40.854790  434426 pod_ready.go:81] duration metric: took 12.519201094s waiting for pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace to be "Ready" ...
	E0813 21:11:40.854805  434426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-2kv7b" not found
	I0813 21:11:40.854816  434426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.864186  434426 pod_ready.go:92] pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.864202  434426 pod_ready.go:81] duration metric: took 9.379202ms waiting for pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.864211  434426 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.871022  434426 pod_ready.go:92] pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.871041  434426 pod_ready.go:81] duration metric: took 6.824229ms waiting for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.871051  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.878077  434426 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.878097  434426 pod_ready.go:81] duration metric: took 7.039745ms waiting for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.878109  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.896064  434426 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.896083  434426 pod_ready.go:81] duration metric: took 17.966303ms waiting for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.896092  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2k9qh" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.058881  434426 pod_ready.go:92] pod "kube-proxy-2k9qh" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:41.058909  434426 pod_ready.go:81] duration metric: took 162.808554ms waiting for pod "kube-proxy-2k9qh" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.058923  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.456678  434426 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:41.456708  434426 pod_ready.go:81] duration metric: took 397.772439ms waiting for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.456720  434426 pod_ready.go:38] duration metric: took 13.13366456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:41.456741  434426 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:11:41.456792  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:11:41.473663  434426 api_server.go:70] duration metric: took 13.658686712s to wait for apiserver process to appear ...
	I0813 21:11:41.473687  434426 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:11:41.473700  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:11:41.481067  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0813 21:11:41.482489  434426 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:11:41.482508  434426 api_server.go:129] duration metric: took 8.812243ms to wait for apiserver health ...
	I0813 21:11:41.482518  434426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:11:41.661255  434426 system_pods.go:59] 8 kube-system pods found
	I0813 21:11:41.661293  434426 system_pods.go:61] "coredns-78fcd69978-r4dmk" [0549f087-6804-403a-91ac-46ea3176692a] Running
	I0813 21:11:41.661302  434426 system_pods.go:61] "etcd-no-preload-20210813210044-393438" [ae4561cd-c25c-4ec9-952c-ee3f2bb9da33] Running
	I0813 21:11:41.661309  434426 system_pods.go:61] "kube-apiserver-no-preload-20210813210044-393438" [6634f014-b661-496f-b26e-8883011d941d] Running
	I0813 21:11:41.661316  434426 system_pods.go:61] "kube-controller-manager-no-preload-20210813210044-393438" [8ac7be54-2d76-4cc5-98ae-d920758801e3] Running
	I0813 21:11:41.661322  434426 system_pods.go:61] "kube-proxy-2k9qh" [22a31bb3-8b54-429b-9161-471a84001351] Running
	I0813 21:11:41.661329  434426 system_pods.go:61] "kube-scheduler-no-preload-20210813210044-393438" [2da08426-2d5c-4a28-af34-9e233605bc60] Running
	I0813 21:11:41.661342  434426 system_pods.go:61] "metrics-server-7c784ccb57-7z8h9" [5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:11:41.661351  434426 system_pods.go:61] "storage-provisioner" [7f18b572-6c04-49c7-96fb-5a2371bb3c87] Running
	I0813 21:11:41.661362  434426 system_pods.go:74] duration metric: took 178.836213ms to wait for pod list to return data ...
	I0813 21:11:41.661382  434426 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:11:41.856737  434426 default_sa.go:45] found service account: "default"
	I0813 21:11:41.856816  434426 default_sa.go:55] duration metric: took 195.424882ms for default service account to be created ...
	I0813 21:11:41.856845  434426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:11:42.058866  434426 system_pods.go:86] 8 kube-system pods found
	I0813 21:11:42.058901  434426 system_pods.go:89] "coredns-78fcd69978-r4dmk" [0549f087-6804-403a-91ac-46ea3176692a] Running
	I0813 21:11:42.058908  434426 system_pods.go:89] "etcd-no-preload-20210813210044-393438" [ae4561cd-c25c-4ec9-952c-ee3f2bb9da33] Running
	I0813 21:11:42.058914  434426 system_pods.go:89] "kube-apiserver-no-preload-20210813210044-393438" [6634f014-b661-496f-b26e-8883011d941d] Running
	I0813 21:11:42.058919  434426 system_pods.go:89] "kube-controller-manager-no-preload-20210813210044-393438" [8ac7be54-2d76-4cc5-98ae-d920758801e3] Running
	I0813 21:11:42.058923  434426 system_pods.go:89] "kube-proxy-2k9qh" [22a31bb3-8b54-429b-9161-471a84001351] Running
	I0813 21:11:42.058927  434426 system_pods.go:89] "kube-scheduler-no-preload-20210813210044-393438" [2da08426-2d5c-4a28-af34-9e233605bc60] Running
	I0813 21:11:42.058935  434426 system_pods.go:89] "metrics-server-7c784ccb57-7z8h9" [5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:11:42.058940  434426 system_pods.go:89] "storage-provisioner" [7f18b572-6c04-49c7-96fb-5a2371bb3c87] Running
	I0813 21:11:42.058948  434426 system_pods.go:126] duration metric: took 202.083479ms to wait for k8s-apps to be running ...
	I0813 21:11:42.058960  434426 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:11:42.059008  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:42.071584  434426 system_svc.go:56] duration metric: took 12.61257ms WaitForService to wait for kubelet.
	I0813 21:11:42.071614  434426 kubeadm.go:547] duration metric: took 14.256642896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:11:42.071643  434426 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:11:42.255842  434426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:11:42.255875  434426 node_conditions.go:123] node cpu capacity is 2
	I0813 21:11:42.255891  434426 node_conditions.go:105] duration metric: took 184.242906ms to run NodePressure ...
	I0813 21:11:42.255902  434426 start.go:231] waiting for startup goroutines ...
	I0813 21:11:42.309791  434426 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:11:42.311704  434426 out.go:177] 
	W0813 21:11:42.311876  434426 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:11:42.313517  434426 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:11:42.315056  434426 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813210044-393438" cluster and "default" namespace by default
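
	The minor-skew warning above (client v1.20.5 against cluster v1.22.0-rc.0) is advisory rather than fatal. Following the hint in the log, a minimal way to run a cluster-matched client is minikube's bundled kubectl (profile name taken from the "Done!" line above):

	    minikube -p no-preload-20210813210044-393438 kubectl -- get pods -A
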
	I0813 21:11:38.082394  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.584132  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:42.585105  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:41.845488  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.344719  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.844731  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:43.345378  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:43.845198  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:44.345476  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:44.845084  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:45.345490  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:45.845540  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:46.345470  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.487321  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:44.986642  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:46.987014  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:45.082319  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:47.580764  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:46.845406  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:47.345178  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:47.845431  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.344795  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.845248  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:49.344914  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:49.844893  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:50.345681  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:50.845210  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:51.345589  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.994491  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:51.487674  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:49.585484  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:52.081864  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:51.845730  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:52.344956  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:52.845569  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:53.345574  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:53.474660  434236 kubeadm.go:985] duration metric: took 12.717768206s to wait for elevateKubeSystemPrivileges.
	I0813 21:11:53.474717  434236 kubeadm.go:392] StartCluster complete in 6m25.212590888s
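
	The repeated `kubectl get sa default` runs above are a fixed-interval poll: minikube retries until the default service account exists, which is what the 12.7s elevateKubeSystemPrivileges wait covers. A minimal shell sketch of the same loop (command and paths verbatim from the log; the ~500ms interval is inferred from the timestamps):

	    until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # consecutive attempts above are roughly 500ms apart
	    done
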
	I0813 21:11:53.474741  434236 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:53.474888  434236 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:53.476656  434236 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:54.001588  434236 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210121-393438" rescaled to 1
	I0813 21:11:54.001644  434236 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.163 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:11:54.003211  434236 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:54.003275  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:54.001714  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:54.001736  434236 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:54.001949  434236 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:11:54.003383  434236 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003390  434236 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003391  434236 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003399  434236 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003419  434236 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003431  434236 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:54.003449  434236 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003462  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003403  434236 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003498  434236 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:54.003403  434236 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003556  434236 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:54.003588  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003526  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003908  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.003921  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.003951  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.003958  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.003998  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.004034  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.004150  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.004173  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.016624  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0813 21:11:54.016972  434236 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210121-393438" to be "Ready" ...
	I0813 21:11:54.017214  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0813 21:11:54.017324  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.017555  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.018129  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.018157  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.018281  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.018306  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.018534  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.018603  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.019128  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.019145  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.019180  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.019215  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.024026  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0813 21:11:54.024370  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.024930  434236 node_ready.go:49] node "default-k8s-different-port-20210813210121-393438" has status "Ready":"True"
	I0813 21:11:54.024945  434236 node_ready.go:38] duration metric: took 7.95601ms waiting for node "default-k8s-different-port-20210813210121-393438" to be "Ready" ...
	I0813 21:11:54.024955  434236 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:54.025186  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.025200  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.025511  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.025673  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.032739  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42125
	I0813 21:11:54.033219  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.033797  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0813 21:11:54.033970  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0813 21:11:54.034136  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.034155  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.034289  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.035331  434236 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:54.035994  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.036306  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.036564  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.036582  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.036588  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.036654  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.036674  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.036955  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.037018  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.037163  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.037523  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.037557  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.038431  434236 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.038457  434236 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:54.038486  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.038974  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.039017  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.041363  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.041425  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.043687  434236 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:54.045078  434236 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:54.043789  434236 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:54.046493  434236 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:54.045176  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:54.046598  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.047176  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:54.047197  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:54.047217  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.054752  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055203  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.055278  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055365  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055555  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.055868  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.055896  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055897  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.056065  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.056123  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.056175  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.056259  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.056403  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.056541  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.059420  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0813 21:11:54.059442  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0813 21:11:54.059892  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.059898  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.060344  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.060373  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.060466  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.060485  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.060711  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.060827  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.061004  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.061201  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.061241  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.064135  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
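
	Each addon above is installed by copying its manifest into /etc/kubernetes/addons on the VM (the "scp memory -->" lines) and then applying it with the bundled kubectl. A sketch of the apply step that follows, with paths and version taken from the log (the exact flags minikube passes are assumed):

	    sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply \
	         --kubeconfig=/var/lib/minikube/kubeconfig \
	         -f /etc/kubernetes/addons/storage-provisioner.yaml
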
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	c802f7156732c       523cad1a4df73       14 seconds ago      Exited              dashboard-metrics-scraper   1                   c9a387bc1fe10
	7a5c8d13e38e3       9a07b5b4bfac0       21 seconds ago      Running             kubernetes-dashboard        0                   81a88eed574b3
	d190ad9281e27       6e38f40d628db       23 seconds ago      Running             storage-provisioner         0                   a51e1b05ddab9
	d91696ad46445       8d147537fb7d1       26 seconds ago      Running             coredns                     0                   3f4a9fcf554b7
	9e3a151de9b04       ea6b13ed84e03       28 seconds ago      Running             kube-proxy                  0                   dd71ffcab16c2
	cf1afa08fe13b       cf9cba6c3e4a8       52 seconds ago      Running             kube-controller-manager     2                   d5e3ceb90e013
	aa4d0f5069490       0048118155842       52 seconds ago      Running             etcd                        2                   2cd725a5ec9f8
	0b6d52d93d8b3       7da2efaa5b480       52 seconds ago      Running             kube-scheduler              2                   9ac1643b6bb6b
	d237a3c155160       b2462aa94d403       53 seconds ago      Running             kube-apiserver              2                   df6cafa1ea4bc
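
	The table above is the CRI runtime's view of the node (note dashboard-metrics-scraper already on attempt 1 after an exit). It can be reproduced on the VM with crictl, which the minikube ISO ships for containerd runtimes (a sketch; profile name taken from the node name in the logs below):

	    minikube -p no-preload-20210813210044-393438 ssh -- sudo crictl ps -a
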
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:17 UTC, end at Fri 2021-08-13 21:11:56 UTC. --
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.328854670Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.380183215Z" level=info msg="TearDown network for sandbox \"d0ad3ec4867f4f8b9a01c1ad0b60a52f041212eef802ffe403122558d774b29b\" successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.380450757Z" level=info msg="StopPodSandbox for \"d0ad3ec4867f4f8b9a01c1ad0b60a52f041212eef802ffe403122558d774b29b\" returns successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.410477125Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.412481988Z" level=info msg="StartContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.851938459Z" level=info msg="StartContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\" returns successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.895651192Z" level=info msg="Finish piping stderr of container \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.895918542Z" level=info msg="Finish piping stdout of container \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.900425453Z" level=info msg="TaskExit event &TaskExit{ContainerID:12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e,ID:12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e,Pid:6281,ExitStatus:1,ExitedAt:2021-08-13 21:11:41.899550485 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.961287594Z" level=info msg="shim disconnected" id=12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.961432168Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.540886126Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.581358705Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.583303064Z" level=info msg="StartContainer for \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.056353118Z" level=info msg="StartContainer for \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\" returns successfully"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.091640109Z" level=info msg="Finish piping stdout of container \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.091661389Z" level=info msg="Finish piping stderr of container \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.093154616Z" level=info msg="TaskExit event &TaskExit{ContainerID:c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64,ID:c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64,Pid:6367,ExitStatus:1,ExitedAt:2021-08-13 21:11:43.092699695 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.159473154Z" level=info msg="shim disconnected" id=c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.159714244Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.528164542Z" level=info msg="RemoveContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.544574531Z" level=info msg="RemoveContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\" returns successfully"
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.192925403Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.198217738Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.200289616Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
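
	The pull failures above appear deliberate in this test: the metrics-server image is prefixed with the unresolvable registry fake.domain, which is why metrics-server-7c784ccb57-7z8h9 stays Pending earlier in the log. The resulting image-pull events can be confirmed from the pod (a sketch; the kubectl context name is assumed to match the profile):

	    kubectl --context no-preload-20210813210044-393438 -n kube-system \
	            describe pod metrics-server-7c784ccb57-7z8h9
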
	
	* 
	* ==> coredns [d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210813210044-393438
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210813210044-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=no-preload-20210813210044-393438
	                    minikube.k8s.io/updated_at=2021_08_13T21_11_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:11:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210813210044-393438
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:11:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.54
	  Hostname:    no-preload-20210813210044-393438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 daf87fe5c2b64cba9f2917b199ed5c40
	  System UUID:                daf87fe5-c2b6-4cba-9f29-17b199ed5c40
	  Boot ID:                    33200d1e-37c6-4466-b969-9244a67b04ce
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-r4dmk                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     31s
	  kube-system                 etcd-no-preload-20210813210044-393438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         38s
	  kube-system                 kube-apiserver-no-preload-20210813210044-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-20210813210044-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-2k9qh                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-20210813210044-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 metrics-server-7c784ccb57-7z8h9                             100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         26s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-kbbhs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-29b2r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 39s   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeReady
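
	For reference, the request percentages in the tables above are taken against the node capacity reported earlier (cpu: 2, memory: 2186320Ki); a quick sanity check of the two allocated totals:

	    echo "850*100/2000" | bc           # cpu: 850m of 2000m -> 42 (%)
	    echo "470*1024*100/2186320" | bc   # memory: 470Mi of 2186320Ki -> 22 (%)
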
	
	* 
	* ==> dmesg <==
	* [  +3.628504] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.054709] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.047242] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +0.694583] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.344816] vboxguest: loading out-of-tree module taints kernel.
	[  +0.013428] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.818759] systemd-fstab-generator[2003]: Ignoring "noauto" for root device
	[  +0.134410] systemd-fstab-generator[2016]: Ignoring "noauto" for root device
	[  +0.203210] systemd-fstab-generator[2046]: Ignoring "noauto" for root device
	[ +22.779482] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[Aug13 21:06] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.282223] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.927510] kauditd_printk_skb: 44 callbacks suppressed
	[ +30.000159] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[Aug13 21:10] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.700521] systemd-fstab-generator[4509]: Ignoring "noauto" for root device
	[Aug13 21:11] systemd-fstab-generator[4900]: Ignoring "noauto" for root device
	[ +14.430798] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.003016] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.269146] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.365798] systemd-fstab-generator[6417]: Ignoring "noauto" for root device
	[  +0.840699] systemd-fstab-generator[6470]: Ignoring "noauto" for root device
	[  +1.066588] systemd-fstab-generator[6523]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e] <==
	* {"level":"info","ts":"2021-08-13T21:11:05.952Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ac82224e2d320a9e","local-member-attributes":"{Name:no-preload-20210813210044-393438 ClientURLs:[https://192.168.61.54:2379]}","request-path":"/0/members/ac82224e2d320a9e/attributes","cluster-id":"f6d71e843b8adcd6","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:11:05.952Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:11:05.956Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T21:11:05.958Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:11:05.962Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.61.54:2379"}
	{"level":"info","ts":"2021-08-13T21:11:05.966Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.966Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:11:05.967Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:11:05.983Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.985Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.054728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-20210813210044-393438\" ","response":"range_response_count:1 size:4387"}
	{"level":"info","ts":"2021-08-13T21:11:25.988Z","caller":"traceutil/trace.go:171","msg":"trace[1670945783] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-20210813210044-393438; range_end:; response_count:1; response_revision:356; }","duration":"101.68014ms","start":"2021-08-13T21:11:25.886Z","end":"2021-08-13T21:11:25.988Z","steps":["trace[1670945783] 'range keys from in-memory index tree'  (duration: 100.624608ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"224.476081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:11:25.990Z","caller":"traceutil/trace.go:171","msg":"trace[1805224945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/cronjob-controller; range_end:; response_count:0; response_revision:356; }","duration":"225.89622ms","start":"2021-08-13T21:11:25.763Z","end":"2021-08-13T21:11:25.989Z","steps":["trace[1805224945] 'range keys from in-memory index tree'  (duration: 224.162884ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"199.33199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:11:25.991Z","caller":"traceutil/trace.go:171","msg":"trace[1794721521] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:356; }","duration":"202.261111ms","start":"2021-08-13T21:11:25.789Z","end":"2021-08-13T21:11:25.991Z","steps":["trace[1794721521] 'range keys from in-memory index tree'  (duration: 199.110207ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:11:26.820Z","caller":"traceutil/trace.go:171","msg":"trace[310762091] linearizableReadLoop","detail":"{readStateIndex:403; appliedIndex:402; }","duration":"154.563751ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.820Z","steps":["trace[310762091] 'read index received'  (duration: 140.61392ms)","trace[310762091] 'applied index is now lower than readState.Index'  (duration: 13.939721ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:11:26.821Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"155.547047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2021-08-13T21:11:26.823Z","caller":"traceutil/trace.go:171","msg":"trace[41867722] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:391; }","duration":"157.648992ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.822Z","steps":["trace[41867722] 'agreement among raft nodes before linearized reading'  (duration: 155.235689ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:26.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.894477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:275"}
	{"level":"info","ts":"2021-08-13T21:11:26.824Z","caller":"traceutil/trace.go:171","msg":"trace[172775019] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:391; }","duration":"104.161227ms","start":"2021-08-13T21:11:26.720Z","end":"2021-08-13T21:11:26.824Z","steps":["trace[172775019] 'agreement among raft nodes before linearized reading'  (duration: 103.843927ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:26.825Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"159.320795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T21:11:26.828Z","caller":"traceutil/trace.go:171","msg":"trace[1096478687] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:391; }","duration":"162.230159ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.828Z","steps":["trace[1096478687] 'agreement among raft nodes before linearized reading'  (duration: 159.257666ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:11:26.829Z","caller":"traceutil/trace.go:171","msg":"trace[866144520] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"180.947054ms","start":"2021-08-13T21:11:26.648Z","end":"2021-08-13T21:11:26.829Z","steps":["trace[866144520] 'process raft request'  (duration: 157.763042ms)","trace[866144520] 'compare'  (duration: 13.842093ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:11:57 up 6 min,  0 users,  load average: 3.20, 1.31, 0.56
	Linux no-preload-20210813210044-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70] <==
	* I0813 21:11:09.834023       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:11:09.835042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:11:09.835623       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0813 21:11:09.837287       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:11:09.906507       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:11:10.605681       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:11:10.605892       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:11:10.631884       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:11:10.649830       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:11:10.650997       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:11:11.792418       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:11:11.869795       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:11:12.001993       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.61.54]
	I0813 21:11:12.004862       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:11:12.011794       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:11:12.758778       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:11:13.598782       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:11:13.688361       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:11:19.013033       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:11:26.316569       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:11:26.406976       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:11:33.283869       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:11:33.284386       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:11:33.284412       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
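
	The 503 above means the aggregated v1beta1.metrics.k8s.io APIService has no healthy backend yet, matching the metrics-server image-pull failure in the containerd log. Its availability condition can be inspected directly (context name assumed to match the profile):

	    kubectl --context no-preload-20210813210044-393438 \
	            get apiservice v1beta1.metrics.k8s.io -o yaml
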
	
	* 
	* ==> kube-controller-manager [cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075] <==
	* I0813 21:11:31.236495       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0813 21:11:31.451141       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:11:31.510992       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.551454       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:11:31.565458       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:31.609911       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.611047       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.611237       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.628036       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:31.650460       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.652169       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.681295       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.682227       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.704434       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.714520       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.722409       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.722715       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.791297       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.791870       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.792129       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.792179       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.883205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-kbbhs"
	I0813 21:11:31.918673       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-29b2r"
	E0813 21:11:56.414385       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:56.887641       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
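Note: the FailedCreate/sync errors above are a startup ordering race: the dashboard ReplicaSets were reconciled before the kubernetes-dashboard ServiceAccount existed, and the controller kept retrying until the SuccessfulCreate events at 21:11:31.88. The later metrics.k8s.io discovery failures line up with metrics-server never becoming ready. A minimal spot-check, using the profile name from these logs, would be:

	kubectl --context no-preload-20210813210044-393438 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl --context no-preload-20210813210044-393438 get apiservice v1beta1.metrics.k8s.io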
	
	* 
	* ==> kube-proxy [9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313] <==
	* I0813 21:11:29.689520       1 node.go:172] Successfully retrieved node IP: 192.168.61.54
	I0813 21:11:29.689606       1 server_others.go:140] Detected node IP 192.168.61.54
	W0813 21:11:29.689635       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0813 21:11:29.817289       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:11:29.817465       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:11:29.817486       1 server_others.go:212] Using iptables Proxier.
	I0813 21:11:29.817939       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:11:29.828721       1 config.go:315] Starting service config controller
	I0813 21:11:29.828840       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:11:29.828868       1 config.go:224] Starting endpoint slice config controller
	I0813 21:11:29.828873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 21:11:29.921936       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813210044-393438.169af9ff39543d6c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5e070f95044, ext:973700886, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813210044-393438", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210813210044-393438", UID:"no-preload-20210813210044-393438", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813210044-393438.169af9ff39543d6c" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:11:29.929926       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:11:29.929977       1 shared_informer.go:247] Caches are synced for service config 
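Note: the 'Unknown proxy mode ""' warning means the mode field in the kube-proxy configuration was left empty, so kube-proxy fell back to the iptables proxier; the rejected Event is cosmetic (a node-scoped event emitted with an empty namespace). Assuming the usual kubeadm layout that minikube uses, the effective configuration can be inspected with:

	kubectl --context no-preload-20210813210044-393438 -n kube-system get configmap kube-proxy -o yaml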
	
	* 
	* ==> kube-scheduler [0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f] <==
	* E0813 21:11:09.818571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:09.830591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:09.830812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:09.831110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:09.831199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:09.831272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:09.831486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:09.831554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.738132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:11:10.758808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.816520       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:10.829812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:10.886783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.889191       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:11.001964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:11.036551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:11.075533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:11.088529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:11.099768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:11.222275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:11.363918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:11.383325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:11.394730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:11:13.440768       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0813 21:11:13.587756       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
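Note: the "forbidden" list/watch errors are confined to the window before the API server finished publishing the default RBAC roles; they stop once the scheduler's informers sync at 21:11:13. If they persisted, the stock binding would be the first place to look:

	kubectl --context no-preload-20210813210044-393438 get clusterrolebinding system:kube-scheduler -o yaml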
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:17 UTC, end at Fri 2021-08-13 21:11:57 UTC. --
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:40.585772    4909 scope.go:110] "RemoveContainer" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:40.590146    4909 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:40.590192    4909 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a} err="failed to get container status \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.192893    4909 remote_runtime.go:276] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.193235    4909 kuberuntime_container.go:719] "Container termination failed with gracePeriod" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 containerName="coredns" containerID="containerd://a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a" gracePeriod=1
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.193280    4909 kuberuntime_container.go:744] "Kill container failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 containerName="coredns" containerID={Type:containerd ID:a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a}
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:41.196792    4909 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 path="/var/lib/kubelet/pods/ed8c9eb2-76b5-470d-8c30-a80df1e22f27/volumes"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.381260    4909 kubelet.go:1767] failed to "KillContainer" for "coredns" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.381433    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = NotFound desc = an error occurred when try to find container \\\"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\\\": not found\"" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27
	Aug 13 21:11:42 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:42.511752    4909 scope.go:110] "RemoveContainer" containerID="12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:43.518201    4909 scope.go:110] "RemoveContainer" containerID="12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:43.518588    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:43.519146    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:44 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:44.523154    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:44 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:44.524650    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.201796    4909 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.202002    4909 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.202877    4909 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8x8p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-7z8h9_kube-system(5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.203243    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-7z8h9" podUID=5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075
	Aug 13 21:11:51 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:51.946339    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:51 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:51.947042    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:53 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:53.564732    4909 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
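Note: two separate failures are interleaved here. The metrics-server ErrImagePull against fake.domain/k8s.gcr.io/echoserver:1.4 is expected: the Audit table below shows the test deliberately registering MetricsServer=fake.domain. The dashboard-metrics-scraper CrashLoopBackOff is the one worth chasing; a sketch of the usual follow-up, with pod names taken from these logs:

	kubectl --context no-preload-20210813210044-393438 -n kubernetes-dashboard logs dashboard-metrics-scraper-8685c45546-kbbhs --previous
	kubectl --context no-preload-20210813210044-393438 -n kube-system describe pod metrics-server-7c784ccb57-7z8h9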
	
	* 
	* ==> kubernetes-dashboard [7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83] <==
	* 2021/08/13 21:11:35 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:35 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:35 Using secret token for csrf signing
	2021/08/13 21:11:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:35 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 21:11:35 Generating JWE encryption key
	2021/08/13 21:11:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:35 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:35 Creating in-cluster Sidecar client
	2021/08/13 21:11:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:35 Serving insecurely on HTTP port: 9090
	2021/08/13 21:11:35 Starting overwatch
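Note: the dashboard itself started cleanly; the failed metric client health check points at the dashboard-metrics-scraper Service, whose only backing pod is the CrashLooping scraper above, and the dashboard retries every 30 seconds. A quick way to confirm the Service has no ready endpoints:

	kubectl --context no-preload-20210813210044-393438 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper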
	
	* 
	* ==> storage-provisioner [d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922] <==
	* I0813 21:11:33.244461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:11:33.295965       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:11:33.296687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:11:33.316175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbd3d7cc-a02d-4c39-8593-ff7ef6900f96", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5 became leader
	I0813 21:11:33.316707       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:11:33.317571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5!
	I0813 21:11:33.421238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5!
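Note: the storage provisioner takes an Endpoints-based leader-election lease (kube-system/k8s.io-minikube-hostpath) before starting its controller, which is why the "became leader" event precedes "Started provisioner controller". The current holder is recorded on that Endpoints object and can be inspected with:

	kubectl --context no-preload-20210813210044-393438 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml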
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438: exit status 2 (317.756077ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-7z8h9
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9: exit status 1 (90.265062ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-7z8h9" not found

** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9: exit status 1
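Note: the NotFound above is a race in the post-mortem itself: metrics-server-7c784ccb57-7z8h9 was present when non-running pods were listed but had already been replaced by the time describe ran. A re-list of the same selector is the simple way to get a fresh snapshot:

	kubectl --context no-preload-20210813210044-393438 get po -A --field-selector=status.phase!=Running -o wide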
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438: exit status 2 (300.666537ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210813210044-393438 logs -n 25

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20210813210044-393438 logs -n 25: (1.560296612s)

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:29 UTC | Fri, 13 Aug 2021 21:01:13 UTC |
	|         | force-systemd-flag-20210813205929-393438          |                                                  |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=kvm2              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	| -p      | force-systemd-flag-20210813205929-393438          | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:13 UTC | Fri, 13 Aug 2021 21:01:13 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                  |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210813205929-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:13 UTC | Fri, 13 Aug 2021 21:01:15 UTC |
	|         | force-systemd-flag-20210813205929-393438          |                                                  |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210813205735-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:38 UTC | Fri, 13 Aug 2021 21:01:20 UTC |
	|         | kubernetes-upgrade-20210813205735-393438          |                                                  |         |         |                               |                               |
	|         | --memory=2200                                     |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813205735-393438         | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:20 UTC | Fri, 13 Aug 2021 21:01:21 UTC |
	|         | kubernetes-upgrade-20210813205735-393438          |                                                  |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210813210121-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:01:21 UTC |
	|         | disable-driver-mounts-20210813210121-393438       |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:53 UTC | Fri, 13 Aug 2021 21:02:44 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:53 UTC | Fri, 13 Aug 2021 21:02:54 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:03:08 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:03:16 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:05:02
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
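Note: per the format line above, each entry starts with a severity letter (I/W/E/F), the date as mmdd, a microsecond timestamp, the emitting thread id, and file:line. Assuming this log were saved to a plain file (hypothetical name last_start.log, without the tab indentation added by the report), warnings, errors, and fatals could be pulled out with:

	grep -E '^[WEF][0-9]{4} ' last_start.log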
	I0813 21:05:02.888018  434502 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:05:02.888239  434502 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:05:02.888250  434502 out.go:311] Setting ErrFile to fd 2...
	I0813 21:05:02.888254  434502 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:05:02.888376  434502 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:05:02.888592  434502 out.go:305] Setting JSON to false
	I0813 21:05:02.924177  434502 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6465,"bootTime":1628882238,"procs":180,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:05:02.924281  434502 start.go:121] virtualization: kvm guest
	I0813 21:05:02.926625  434502 out.go:177] * [embed-certs-20210813210115-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:05:02.928076  434502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:02.926775  434502 notify.go:169] Checking for updates...
	I0813 21:05:02.929450  434502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:05:02.930769  434502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:05:02.932110  434502 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:05:02.932613  434502 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:02.933226  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:02.933308  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:02.943630  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0813 21:05:02.944019  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:02.944692  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:02.944721  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:02.945088  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:02.945271  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:02.945440  434502 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:05:02.945896  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:02.945940  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:02.957603  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0813 21:05:02.957986  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:02.958515  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:02.958538  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:02.958876  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:02.959058  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:02.987833  434502 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:05:02.987853  434502 start.go:278] selected driver: kvm2
	I0813 21:05:02.987859  434502 start.go:751] validating driver "kvm2" against &{Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.987944  434502 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:05:02.988838  434502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.988993  434502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:05:02.998693  434502 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:05:02.999041  434502 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:05:02.999069  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:05:02.999079  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:02.999087  434502 start_flags.go:277] config:
	{Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.999193  434502 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.318771  434426 out.go:177] * Starting control plane node no-preload-20210813210044-393438 in cluster no-preload-20210813210044-393438
	I0813 21:05:02.318800  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:02.318965  434426 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/config.json ...
	I0813 21:05:02.319196  434426 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:05:02.319251  434426 start.go:313] acquiring machines lock for no-preload-20210813210044-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:05:02.319528  434426 cache.go:108] acquiring lock: {Name:mkaf60fb03fc48f620204835a8c2e58ac4285be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319558  434426 cache.go:108] acquiring lock: {Name:mke39e3353eb75c75254f6351f63129b8eccdaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319568  434426 cache.go:108] acquiring lock: {Name:mke82dad524ab7543f06ba80a46c31462e90eaf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319562  434426 cache.go:108] acquiring lock: {Name:mk5ae4dca388ede54efe3f0a495fa4d7f638ce4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319651  434426 cache.go:108] acquiring lock: {Name:mk920d9e9f29ba2b1781316e9067fe8a78e86bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319694  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0813 21:05:02.319727  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0813 21:05:02.319731  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0813 21:05:02.319733  434426 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 185.868µs
	I0813 21:05:02.319748  434426 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0813 21:05:02.319711  434426 cache.go:108] acquiring lock: {Name:mkcfd106f227ad483e6a4cbb38d06a5e17fb84c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319645  434426 cache.go:108] acquiring lock: {Name:mk6fe844ec73ef4a411cd1ad882359df79d1727f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319776  434426 cache.go:108] acquiring lock: {Name:mke4c28e30686341fb8f0ce651a18ccb674aa951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319774  434426 cache.go:108] acquiring lock: {Name:mk87a11a146365014f21d5bffcf66f3437c38926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319780  434426 cache.go:108] acquiring lock: {Name:mkc72f69bc91cc506098b4e6b602bd9bf210acef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:05:02.319751  434426 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 206.009µs
	I0813 21:05:02.319694  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 21:05:02.319815  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 21:05:02.319831  434426 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 123.7µs
	I0813 21:05:02.319832  434426 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 273.68µs
	I0813 21:05:02.319842  434426 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 21:05:02.319758  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 21:05:02.319846  434426 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 21:05:02.319814  434426 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0813 21:05:02.319742  434426 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 94.501µs
	I0813 21:05:02.319858  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0813 21:05:02.319860  434426 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319861  434426 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 352.713µs
	I0813 21:05:02.319869  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0813 21:05:02.319876  434426 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 21:05:02.319871  434426 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 99.695µs
	I0813 21:05:02.319884  434426 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319868  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0813 21:05:02.319890  434426 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 261.55µs
	I0813 21:05:02.319900  434426 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0813 21:05:02.319906  434426 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319904  434426 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 127.312µs
	I0813 21:05:02.319914  434426 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0813 21:05:02.319914  434426 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 140.766µs
	I0813 21:05:02.319929  434426 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0813 21:05:02.319937  434426 cache.go:88] Successfully saved all images to host disk.
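Each cache.go exchange above has the same shape: take a per-image lock, stat the cached tarball under .minikube/cache/images, and report a hit ("exists ... succeeded") after a few hundred microseconds when the file is already on disk. A minimal sketch of that check, under assumed names (saveToTarIfMissing is illustrative, not minikube's actual API):
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	// saveToTarIfMissing mirrors the exists/succeeded pattern in the log:
	// when the cached tarball already exists, the save is a no-op and only
	// the lookup time is reported.
	func saveToTarIfMissing(image, tarPath string) error {
		start := time.Now()
		if _, err := os.Stat(tarPath); err == nil {
			fmt.Printf("cache image %q -> %q took %s (cache hit)\n", image, tarPath, time.Since(start))
			return nil
		}
		// On a miss, minikube would pull the image and write the tarball here.
		return fmt.Errorf("cache miss for %s (download not implemented in this sketch)", image)
	}

	func main() {
		_ = saveToTarIfMissing("k8s.gcr.io/pause:3.4.1", filepath.Join(os.TempDir(), "pause_3.4.1"))
	}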
	I0813 21:05:06.432932  434426 start.go:317] acquired machines lock for "no-preload-20210813210044-393438" in 4.113653507s
	I0813 21:05:06.432981  434426 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:05:06.432988  434426 fix.go:55] fixHost starting: 
	I0813 21:05:06.433427  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:06.433480  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:06.447081  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0813 21:05:06.447544  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:06.448090  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:05:06.448115  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:06.448448  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:06.448643  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:06.448817  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:05:06.451652  434426 fix.go:108] recreateIfNeeded on no-preload-20210813210044-393438: state=Stopped err=<nil>
	I0813 21:05:06.451684  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	W0813 21:05:06.451840  434426 fix.go:134] unexpected machine state, will restart: <nil>
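Before touching the VM, fix.go reads the machine's current state: Stopped is logged as "unexpected machine state, will restart", so the flow restarts the existing VM rather than recreating it (hence the earlier "Skipping create...Using existing machine configuration"). A rough sketch of that decision with an assumed State type; only the message strings come from the log:
	package main

	import "fmt"

	type State int

	const (
		Running State = iota
		Stopped
		Missing
	)

	// decide mirrors the recreateIfNeeded branching visible in the log:
	// an existing-but-stopped machine is restarted, never recreated.
	func decide(s State) string {
		switch s {
		case Running:
			return "machine is running, reusing as-is"
		case Stopped:
			return "unexpected machine state, will restart"
		default:
			return "machine missing, recreating"
		}
	}

	func main() {
		fmt.Println(decide(Stopped))
	}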
	I0813 21:05:04.382080  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:04.387091  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:04.387537  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:04.387577  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:04.387769  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Using SSH client type: external
	I0813 21:05:04.387816  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa (-rw-------)
	I0813 21:05:04.387876  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:04.387899  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | About to run SSH command:
	I0813 21:05:04.387915  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | exit 0
	I0813 21:05:05.526119  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:05.526446  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetConfigRaw
	I0813 21:05:05.527199  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:05.532260  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.532572  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.532606  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.532871  434236 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/config.json ...
	I0813 21:05:05.533085  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:05.533269  434236 machine.go:88] provisioning docker machine ...
	I0813 21:05:05.533290  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:05.533480  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.533630  434236 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210813210121-393438"
	I0813 21:05:05.533648  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.533772  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.538168  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.538470  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.538506  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.538587  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:05.538780  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.538935  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.539128  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:05.539270  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:05.539473  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:05.539489  434236 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210813210121-393438 && echo "default-k8s-different-port-20210813210121-393438" | sudo tee /etc/hostname
	I0813 21:05:05.673970  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210813210121-393438
	
	I0813 21:05:05.673998  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.678812  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.679112  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.679156  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.679296  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:05.679467  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.679625  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:05.679755  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:05.679951  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:05.680117  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:05.680142  434236 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210813210121-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210813210121-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210813210121-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:05.817552  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:05.817583  434236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:05.817602  434236 buildroot.go:174] setting up certificates
	I0813 21:05:05.817613  434236 provision.go:83] configureAuth start
	I0813 21:05:05.817630  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetMachineName
	I0813 21:05:05.817886  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:05.823752  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.824099  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.824139  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.824244  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:05.829042  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.829362  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:05.829397  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:05.829488  434236 provision.go:138] copyHostCerts
	I0813 21:05:05.829565  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:05.829579  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:05.829627  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:05.829738  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:05.829747  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:05.829771  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:05.829833  434236 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:05.829845  434236 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:05.829859  434236 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:05.829910  434236 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210813210121-393438 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube default-k8s-different-port-20210813210121-393438]
	I0813 21:05:06.014962  434236 provision.go:172] copyRemoteCerts
	I0813 21:05:06.015027  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:06.015075  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.020059  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.020442  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.020483  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.020603  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.020788  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.020948  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.021074  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.114984  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:06.132635  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1314 bytes)
	I0813 21:05:06.149756  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:05:06.167156  434236 provision.go:86] duration metric: configureAuth took 349.531077ms
	I0813 21:05:06.167173  434236 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:06.167307  434236 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:06.167319  434236 machine.go:91] provisioned docker machine in 634.036261ms
	I0813 21:05:06.167325  434236 start.go:267] post-start starting for "default-k8s-different-port-20210813210121-393438" (driver="kvm2")
	I0813 21:05:06.167331  434236 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:06.167350  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.167606  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:06.167647  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.172554  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.172895  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.172927  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.173085  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.173238  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.173380  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.173532  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.266295  434236 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:06.271058  434236 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:06.271077  434236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:06.271129  434236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:06.271224  434236 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:06.271343  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:06.278509  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:06.295512  434236 start.go:270] post-start completed in 128.173359ms
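The post-start step scans the profile's addons and files trees and mirrors anything found onto the guest, keeping the relative path (files/etc/ssl/certs/3934382.pem lands in /etc/ssl/certs above). A rough equivalent of that scan, assuming a hypothetical local root:
	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// listLocalAssets walks root and prints each file next to the guest
	// path it would be copied to, mirroring filesync.go's scan.
	func listLocalAssets(root string) error {
		return filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, _ := filepath.Rel(root, p)
			fmt.Printf("local asset: %s -> /%s\n", p, rel)
			return nil
		})
	}

	func main() {
		// Assumed root; in the log it is the integration run's .minikube/files.
		_ = listLocalAssets("/tmp/minikube-files")
	}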
	I0813 21:05:06.295544  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.295768  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.300966  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.301296  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.301340  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.301419  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.301649  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.301848  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.302008  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.302196  434236 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:06.302368  434236 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0813 21:05:06.302380  434236 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:05:06.432728  434236 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888706.367503924
	
	I0813 21:05:06.432751  434236 fix.go:212] guest clock: 1628888706.367503924
	I0813 21:05:06.432761  434236 fix.go:225] Guest: 2021-08-13 21:05:06.367503924 +0000 UTC Remote: 2021-08-13 21:05:06.295750146 +0000 UTC m=+14.752103869 (delta=71.753778ms)
	I0813 21:05:06.432783  434236 fix.go:196] guest clock delta is within tolerance: 71.753778ms
	I0813 21:05:06.432791  434236 fix.go:57] fixHost completed within 14.715695122s
	I0813 21:05:06.432798  434236 start.go:80] releasing machines lock for "default-k8s-different-port-20210813210121-393438", held for 14.715723811s
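To validate the restarted machine, the guest clock is probed with date +%s.%N over SSH and compared against the host clock; fix.go accepts the host once the delta (71.753778ms here) is within tolerance. The comparison reduces to a few lines; the one-second tolerance below is an assumption for the sketch, not minikube's configured value:
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// withinTolerance parses the guest's `date +%s.%N` output and checks
	// its skew against the local clock.
	func withinTolerance(guestOut string, tolerance time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		delta, ok := withinTolerance("1628888706.367503924", time.Second)
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, ok)
	}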
	I0813 21:05:06.432880  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.433135  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:06.438544  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.438953  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.438989  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.439107  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439256  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439723  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:05:06.439924  434236 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:06.439947  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.439996  434236 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:06.440042  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:05:06.446606  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.446709  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447016  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.447068  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447100  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:06.447105  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.447118  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:06.447316  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:05:06.447333  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.447509  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.447522  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:05:06.447659  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:06.447675  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:05:06.447813  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:05:02.137378  434036 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:02.137399  434036 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:05:02.137443  434036 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:02.178833  434036 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:02.178858  434036 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:02.178905  434036 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:02.219726  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:05:02.219760  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:02.219773  434036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:02.219787  434036 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.180 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210813205952-393438 NodeName:old-k8s-version-20210813205952-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:02.219952  434036 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210813205952-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210813205952-393438
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.180:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
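As the generated comment notes, disk resource management is deliberately switched off for the test VM: imageGCHighThresholdPercent: 100 means image garbage collection never triggers, and evictionHard thresholds of "0%" only evict once a filesystem is completely full. One quick way to sanity-check such a fragment is to unmarshal it, for example with gopkg.in/yaml.v3 (an assumed dependency for this sketch):
	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type kubeletFragment struct {
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
	}

	func main() {
		in := []byte("imageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\n")
		var f kubeletFragment
		if err := yaml.Unmarshal(in, &f); err != nil {
			panic(err)
		}
		fmt.Printf("imageGC threshold=%d evictionHard=%v\n", f.ImageGCHighThresholdPercent, f.EvictionHard)
	}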
	I0813 21:05:02.220071  434036 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210813205952-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.180 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813205952-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:05:02.220131  434036 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0813 21:05:02.229626  434036 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:02.229694  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:02.239705  434036 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (625 bytes)
	I0813 21:05:02.254486  434036 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:05:02.269654  434036 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0813 21:05:02.284962  434036 ssh_runner.go:149] Run: grep 192.168.83.180	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:02.290202  434036 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
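The one-liner above is the usual pattern for privileged edits: a shell redirection runs as the unprivileged caller even under sudo, so the filtered file is assembled in /tmp and then copied into place with sudo cp. Its string transformation, dropping any stale control-plane.minikube.internal entry and appending the current one, can be expressed in Go as follows (ensureHostsEntry is an illustrative name):
	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry removes any existing line for host and appends a
	// fresh "ip<TAB>host" entry, matching the bash pipeline in the log.
	func ensureHostsEntry(hosts, ip, host string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+host)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.83.1\tcontrol-plane.minikube.internal\n"
		fmt.Print(ensureHostsEntry(before, "192.168.83.180", "control-plane.minikube.internal"))
	}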
	I0813 21:05:02.302287  434036 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438 for IP: 192.168.83.180
	I0813 21:05:02.302334  434036 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:02.302352  434036 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:02.302412  434036 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.key
	I0813 21:05:02.302437  434036 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.key.c79f34d7
	I0813 21:05:02.302462  434036 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.key
	I0813 21:05:02.302586  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:02.302634  434036 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:02.302645  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:02.302692  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:02.302756  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:02.302792  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:02.302845  434036 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:02.304152  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:02.322598  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 21:05:02.339465  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:02.358152  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:05:02.378188  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:02.397511  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:02.415437  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:02.434086  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:02.452673  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:02.469777  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:02.485698  434036 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:02.502082  434036 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
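In these logs "scp memory --> <path>" means the payload (here a 738-byte kubeconfig) was generated in-process and streamed straight to the remote path, not copied from a local file. A rough shell equivalent, where generate_kubeconfig stands in for the in-memory generation and the address is a placeholder:

	# stream generated content to a root-owned remote file; generate_kubeconfig is hypothetical
	generate_kubeconfig | ssh docker@192.168.83.180 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'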
	I0813 21:05:02.514329  434036 ssh_runner.go:149] Run: openssl version
	I0813 21:05:02.519755  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:02.527526  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.532127  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.532172  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:02.537982  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:05:02.547421  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:02.557122  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.561783  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.561825  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:02.568483  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:05:02.578597  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:02.586199  434036 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.591637  434036 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.591674  434036 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:02.597570  434036 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
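The test/ln/openssl sequences above implement OpenSSL's subject-hash directory convention: a CA certificate placed in /etc/ssl/certs is only found by verifiers through a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. A condensed sketch of installing one cert this way (paths illustrative):

	# install a CA cert the way c_rehash would: link it in, then add the <hash>.0 symlink
	cert=/usr/share/ca-certificates/example.pem
	sudo ln -fs "$cert" "/etc/ssl/certs/$(basename "$cert")"
	hash=$(openssl x509 -hash -noout -in "$cert")        # e.g. 3ec20f2e
	sudo ln -fs "/etc/ssl/certs/$(basename "$cert")" "/etc/ssl/certs/${hash}.0"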
	I0813 21:05:02.605363  434036 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210813205952-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210813205952-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.180 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:02.605459  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:02.605491  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:02.637033  434036 cri.go:76] found id: ""
	I0813 21:05:02.637074  434036 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:02.644103  434036 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:02.644121  434036 kubeadm.go:600] restartCluster start
	I0813 21:05:02.644161  434036 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:02.651608  434036 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:02.652905  434036 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210813205952-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:02.653446  434036 kubeconfig.go:128] "old-k8s-version-20210813205952-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:02.654463  434036 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:05:02.657561  434036 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:02.665300  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:02.665341  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:02.677034  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:02.877383  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:02.877444  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:02.889500  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.077691  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.077756  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.087473  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.277813  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.277909  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.288350  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.477578  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.477685  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.491136  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.677355  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.677428  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.689066  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:03.877274  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:03.877375  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:03.887744  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.078043  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.078134  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.088710  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.278031  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.278126  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.288175  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.477425  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.477506  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.487223  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.677604  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.677673  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.687327  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:04.877648  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:04.877749  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:04.887947  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.077168  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.077259  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.086969  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.277273  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.277351  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.286776  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.478052  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.478136  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.487219  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.677543  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.677629  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.686541  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.686554  434036 api_server.go:164] Checking apiserver status ...
	I0813 21:05:05.686585  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:05.694779  434036 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:05.694799  434036 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
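The long run of "Checking apiserver status" entries above is a bounded poll: roughly every 200ms the runner executes sudo pgrep -xnf kube-apiserver.*minikube.* and treats exit status 1 (no matching process) as "apiserver not running"; once the deadline passes it concludes the cluster needs reconfiguring. A minimal sketch of that style of poll, with the interval and timeout chosen for illustration rather than read from the code:

	# poll for the apiserver process until a deadline, then fall back
	deadline=$(( $(date +%s) + 3 ))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for apiserver" >&2; break
	    fi
	    sleep 0.2
	done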
	I0813 21:05:05.694807  434036 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:05.694820  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:05.694872  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:05.724669  434036 cri.go:76] found id: ""
	I0813 21:05:05.724723  434036 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:05.738929  434036 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:05.746855  434036 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:05.746901  434036 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:05.753188  434036 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:05.753208  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.188862  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.941431  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:06.454088  434426 out.go:177] * Restarting existing kvm2 VM for "no-preload-20210813210044-393438" ...
	I0813 21:05:06.454119  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Start
	I0813 21:05:06.454272  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring networks are active...
	I0813 21:05:06.456218  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring network default is active
	I0813 21:05:06.456553  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Ensuring network mk-no-preload-20210813210044-393438 is active
	I0813 21:05:06.456952  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Getting domain xml...
	I0813 21:05:06.458950  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Creating domain...
	I0813 21:05:06.853802  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Waiting to get IP...
	I0813 21:05:06.854814  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.855297  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Found IP for machine: 192.168.61.54
	I0813 21:05:06.855335  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has current primary IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.855349  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Reserving static IP address...
	I0813 21:05:06.855696  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "no-preload-20210813210044-393438", mac: "52:54:00:e4:61:bf", ip: "192.168.61.54"} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:01:05 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:06.855734  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Reserved static IP address: 192.168.61.54
	I0813 21:05:06.855762  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | skip adding static IP to network mk-no-preload-20210813210044-393438 - found existing host DHCP lease matching {name: "no-preload-20210813210044-393438", mac: "52:54:00:e4:61:bf", ip: "192.168.61.54"}
	I0813 21:05:06.855795  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:06.855812  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Waiting for SSH to be available...
	I0813 21:05:06.860576  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.861034  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:01:05 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:06.861084  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:06.861251  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Using SSH client type: external
	I0813 21:05:06.861281  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa (-rw-------)
	I0813 21:05:06.861320  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:06.861340  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | About to run SSH command:
	I0813 21:05:06.861354  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | exit 0
	I0813 21:05:03.000913  434502 out.go:177] * Starting control plane node embed-certs-20210813210115-393438 in cluster embed-certs-20210813210115-393438
	I0813 21:05:03.000930  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:03.000962  434502 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:05:03.000974  434502 cache.go:56] Caching tarball of preloaded images
	I0813 21:05:03.001073  434502 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:05:03.001089  434502 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:05:03.001201  434502 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/config.json ...
	I0813 21:05:03.001377  434502 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:05:03.001402  434502 start.go:313] acquiring machines lock for embed-certs-20210813210115-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:05:06.544163  434236 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:06.546237  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:10.583772  434236 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.037505629s)
	I0813 21:05:10.583933  434236 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:05:10.584017  434236 ssh_runner.go:149] Run: which lz4
	I0813 21:05:10.589245  434236 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:05:10.593996  434236 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:05:10.594022  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0813 21:05:07.129333  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:07.186110  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
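Rather than rerunning kubeadm init from scratch, the restart path replays individual init phases (certs and kubeconfig earlier, then kubelet-start, control-plane, and etcd above) against the saved /var/tmp/minikube/kubeadm.yaml, regenerating only what a restart invalidates. Condensed, the sequence amounts to:

	# replay the kubeadm init phases used on restart (config path as in the log above)
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase unquoted on purpose
	done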
	I0813 21:05:07.235916  434036 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:07.235998  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:07.748101  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:08.248184  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:08.748204  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:09.247459  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:09.747390  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:10.247498  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:10.748061  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:11.247289  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:11.747470  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.430768  434236 containerd.go:546] Took 3.841559 seconds to copy over tarball
	I0813 21:05:14.430846  434236 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
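Because the stat check found no /preloaded.tar.lz4 on the VM, the ~929 MB preload tarball was scp'd over and is unpacked here under /var, where containerd keeps its image store, so the node starts with the Kubernetes images already present. The extraction is plain tar with lz4 as the decompression filter:

	# unpack a preloaded image tarball into /var, then confirm containerd can see the images
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json | head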
	I0813 21:05:12.247435  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:12.747294  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:13.247274  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:13.747881  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.248154  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:14.747920  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:15.247841  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:15.747949  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:16.247615  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:16.747300  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.829404  434502 start.go:317] acquired machines lock for "embed-certs-20210813210115-393438" in 17.827980029s
	I0813 21:05:20.829452  434502 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:05:20.829459  434502 fix.go:55] fixHost starting: 
	I0813 21:05:20.829940  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:05:20.830000  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:05:20.844289  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0813 21:05:20.844710  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:05:20.845255  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:05:20.845286  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:05:20.845702  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:05:20.845894  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:20.846063  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:05:20.849256  434502 fix.go:108] recreateIfNeeded on embed-certs-20210813210115-393438: state=Stopped err=<nil>
	I0813 21:05:20.849289  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	W0813 21:05:20.849461  434502 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:05:17.247659  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:17.747678  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:18.247917  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:18.747391  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:19.247353  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:19.748079  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.248063  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.747264  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:21.248202  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:21.747268  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:20.014334  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:20.014703  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetConfigRaw
	I0813 21:05:20.015453  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.020523  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.020887  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.020915  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.021216  434426 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/config.json ...
	I0813 21:05:20.021379  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.021570  434426 machine.go:88] provisioning docker machine ...
	I0813 21:05:20.021591  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.021761  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.021875  434426 buildroot.go:166] provisioning hostname "no-preload-20210813210044-393438"
	I0813 21:05:20.021895  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.022022  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.026141  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.026506  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.026562  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.026646  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.026821  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.026940  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.027078  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.027227  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.027464  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.027484  434426 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210813210044-393438 && echo "no-preload-20210813210044-393438" | sudo tee /etc/hostname
	I0813 21:05:20.147229  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210813210044-393438
	
	I0813 21:05:20.147257  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.152107  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.152466  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.152497  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.152625  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.152801  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.152950  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.153071  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.153211  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.153354  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.153384  434426 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210813210044-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210813210044-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210813210044-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:20.272763  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:20.272797  434426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:20.272825  434426 buildroot.go:174] setting up certificates
	I0813 21:05:20.272839  434426 provision.go:83] configureAuth start
	I0813 21:05:20.272855  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetMachineName
	I0813 21:05:20.273158  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.279037  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.279386  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.279412  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.279522  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.283673  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.284008  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.284038  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.284148  434426 provision.go:138] copyHostCerts
	I0813 21:05:20.284212  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:20.284240  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:20.284281  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:20.284383  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:20.284395  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:20.284419  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:20.284490  434426 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:20.284499  434426 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:20.284529  434426 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:20.284602  434426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210813210044-393438 san=[192.168.61.54 192.168.61.54 localhost 127.0.0.1 minikube no-preload-20210813210044-393438]
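The server certificate generated above carries every name a client might use to reach the machine (the VM IP, localhost, 127.0.0.1, minikube, and the machine name) as subject alternative names. minikube does this in Go; for comparison only, a self-signed certificate with the same kind of SAN list can be produced with the openssl CLI (OpenSSL 1.1.1+; values illustrative, not minikube's code path):

	# self-signed server cert with an explicit SAN list (illustrative)
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	    -keyout server-key.pem -out server.pem -subj "/CN=minikube" \
	    -addext "subjectAltName=IP:192.168.61.54,IP:127.0.0.1,DNS:localhost,DNS:minikube"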
	I0813 21:05:20.460696  434426 provision.go:172] copyRemoteCerts
	I0813 21:05:20.460754  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:20.460784  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.465578  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.465824  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.465849  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.466020  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.466187  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.466318  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.466426  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.549980  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:20.566659  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 21:05:20.582025  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:05:20.597459  434426 provision.go:86] duration metric: configureAuth took 324.607454ms
	I0813 21:05:20.597480  434426 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:20.597630  434426 config.go:177] Loaded profile config "no-preload-20210813210044-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:05:20.597641  434426 machine.go:91] provisioned docker machine in 576.056768ms
	I0813 21:05:20.597648  434426 start.go:267] post-start starting for "no-preload-20210813210044-393438" (driver="kvm2")
	I0813 21:05:20.597654  434426 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:20.597676  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.598063  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:20.598109  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.603075  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.603418  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.603449  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.603543  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.603679  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.603801  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.603939  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.685964  434426 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:20.690151  434426 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:20.690173  434426 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:20.690229  434426 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:20.690331  434426 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:20.690448  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:20.697161  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:20.712227  434426 start.go:270] post-start completed in 114.567301ms
	I0813 21:05:20.712262  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.712551  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.717402  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.717730  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.717763  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.717833  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.718006  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.718113  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.718254  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.718395  434426 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:20.718528  434426 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0813 21:05:20.718538  434426 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:05:20.829132  434426 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888720.727275171
	
	I0813 21:05:20.829162  434426 fix.go:212] guest clock: 1628888720.727275171
	I0813 21:05:20.829172  434426 fix.go:225] Guest: 2021-08-13 21:05:20.727275171 +0000 UTC Remote: 2021-08-13 21:05:20.71253417 +0000 UTC m=+18.577992282 (delta=14.741001ms)
	I0813 21:05:20.829200  434426 fix.go:196] guest clock delta is within tolerance: 14.741001ms
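The fix.go lines above read the guest's clock with date +%s.%N over SSH, compare it to host time, and accept the machine if the drift is small (14.741001ms here). A standalone version of the same measurement, with a placeholder address:

	# measure guest/host clock skew over SSH (address is a placeholder)
	guest=$(ssh docker@192.168.61.54 date +%s.%N)
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.3fs\n", h - g }'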
	I0813 21:05:20.829208  434426 fix.go:57] fixHost completed within 14.396219786s
	I0813 21:05:20.829214  434426 start.go:80] releasing machines lock for "no-preload-20210813210044-393438", held for 14.39625577s
	I0813 21:05:20.829267  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.829537  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:20.835175  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.835525  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.835557  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.835718  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.835862  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.836588  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:05:20.836852  434426 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:20.836880  434426 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:20.836883  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.836925  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:05:20.842008  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.842283  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.842316  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.842429  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.842589  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.842829  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.843009  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.843166  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.843479  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:20.843508  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:20.843740  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:05:20.843914  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:05:20.844086  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:05:20.844212  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:05:20.925525  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:20.925625  434426 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:05:20.961756  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:05:20.972595  434426 docker.go:153] disabling docker service ...
	I0813 21:05:20.972647  434426 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:05:20.984043  434426 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:05:20.997048  434426 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:05:21.143159  434426 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:05:21.280324  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:05:21.292461  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:05:21.309628  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
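Editor's note: the base64 payload above appears to be the generated containerd config.toml (it begins with root = "/var/lib/containerd" and state = "/run/containerd", and includes CRI settings such as sandbox_image = "k8s.gcr.io/pause:3.4.1" and SystemdCgroup = false). To inspect such a payload offline, a sketch of decoding it in Go (the payload below is truncated to the first two lines for brevity):

	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
	)

	func main() {
		// First segment of the payload from the log line above (truncated).
		payload := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCg=="
		cfg, err := base64.StdEncoding.DecodeString(payload)
		if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		os.Stdout.Write(cfg) // prints the config.toml text
	}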
	I0813 21:05:21.326009  434426 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:05:21.333161  434426 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:05:21.333225  434426 ssh_runner.go:149] Run: sudo modprobe br_netfilter
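Editor's note: the status-255 sysctl above is expected on a fresh guest — /proc/sys/net/bridge only exists once br_netfilter is loaded, so the failed read is treated as a soft check ("which might be okay") and followed by the modprobe. A sketch of that check-then-load sequence (illustrative local command runner, not minikube's ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A failed read here usually just means br_netfilter is not loaded yet.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("sysctl failed (module likely missing):", err)
			// After loading the module, the key appears under /proc/sys/net/bridge.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe failed:", err)
			}
		}
	}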
	I0813 21:05:21.347027  434426 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:05:21.353239  434426 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:21.490905  434426 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:21.539895  434426 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:05:21.539979  434426 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:21.545304  434426 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
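Editor's note: the retry above is minikube waiting (up to 60s, per start.go:392) for the containerd socket to reappear after the restart; pgrep-style stat failures are retried with a backoff. A minimal sketch of that poll-with-deadline pattern, assuming a fixed backoff rather than minikube's internal retry package:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the deadline passes.
	func waitForSocket(path string, timeout, backoff time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(backoff)
		}
	}

	func main() {
		err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second)
		fmt.Println("wait result:", err)
	}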
	I0813 21:05:25.480460  434236 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.049581044s)
	I0813 21:05:25.480492  434236 containerd.go:553] Took 11.049688 seconds to extract the tarball
	I0813 21:05:25.480507  434236 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:05:25.541290  434236 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:25.688551  434236 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:25.730844  434236 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:05:25.772570  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:05:25.789555  434236 docker.go:153] disabling docker service ...
	I0813 21:05:25.789606  434236 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:05:25.809341  434236 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:05:25.819728  434236 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:05:25.989268  434236 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:05:26.146250  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:05:26.161385  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:05:26.178481  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 21:05:26.203180  434236 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:05:26.212763  434236 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:05:26.212829  434236 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:05:26.234818  434236 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:05:26.242186  434236 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:05:26.380655  434236 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:05:26.412767  434236 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:05:26.412855  434236 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:26.422428  434236 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 21:05:22.247279  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:24.748176  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:25.247258  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:25.747908  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:26.247535  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:26.747227  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:22.650397  434426 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:24.541137  434426 start.go:413] Will wait 60s for crictl version
	I0813 21:05:24.541205  434426 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:24.597599  434426 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T21:05:24Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 21:05:27.527828  434236 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:05:27.534223  434236 start.go:413] Will wait 60s for crictl version
	I0813 21:05:27.534302  434236 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:27.569887  434236 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:05:27.569975  434236 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:27.602589  434236 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:24.120446  434502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-20210813210115-393438" ...
	I0813 21:05:24.527111  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Start
	I0813 21:05:24.527410  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring networks are active...
	I0813 21:05:24.530247  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring network default is active
	I0813 21:05:24.530813  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Ensuring network mk-embed-certs-20210813210115-393438 is active
	I0813 21:05:24.531296  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Getting domain xml...
	I0813 21:05:24.533230  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Creating domain...
	I0813 21:05:25.823170  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Waiting to get IP...
	I0813 21:05:25.824727  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.825288  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Found IP for machine: 192.168.72.95
	I0813 21:05:25.825326  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has current primary IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.825338  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Reserving static IP address...
	I0813 21:05:25.825778  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "embed-certs-20210813210115-393438", mac: "52:54:00:f7:8f:97", ip: "192.168.72.95"} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:25.825808  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Reserved static IP address: 192.168.72.95
	I0813 21:05:25.825840  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | skip adding static IP to network mk-embed-certs-20210813210115-393438 - found existing host DHCP lease matching {name: "embed-certs-20210813210115-393438", mac: "52:54:00:f7:8f:97", ip: "192.168.72.95"}
	I0813 21:05:25.825864  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:25.825878  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Waiting for SSH to be available...
	I0813 21:05:25.831617  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.832074  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:25.832097  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:25.832291  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:25.832319  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:25.832358  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:25.832368  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:25.832379  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
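Editor's note: WaitForSSH above shells out to the system ssh binary with strict options (shown in the DBG line) and runs `exit 0` until the guest answers; the exit-status-255 block further down is simply sshd not being up yet. A sketch of that probe, with the key path as a placeholder (abbreviated from the DBG options, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeSSH returns nil once `ssh ... exit 0` succeeds against the guest.
	func probeSSH(host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		return cmd.Run() // exit status 255 means sshd is not reachable yet
	}

	func main() {
		for i := 0; i < 30; i++ {
			if err := probeSSH("192.168.72.95", "/path/to/id_rsa"); err == nil {
				fmt.Println("ssh is up")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for ssh")
	}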
	I0813 21:05:27.634975  434236 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:05:27.635046  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetIP
	I0813 21:05:27.640718  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:27.641057  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:05:27.641096  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:05:27.641226  434236 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:05:27.645735  434236 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
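Editor's note: the bash one-liner above makes the hosts entry idempotent — it filters any existing `host.minikube.internal` line out of /etc/hosts, appends the fresh mapping, and copies the temp file back into place. The same idea expressed in Go (a sketch; minikube actually runs the shell pipeline shown in the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites hosts content so exactly one line maps name to ip.
	func upsertHost(content, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(content, "\n") {
			// Drop any stale mapping for this name (tab-separated, like /etc/hosts).
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, _ := os.ReadFile("/etc/hosts")
		fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"), "192.168.39.1", "host.minikube.internal"))
	}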
	I0813 21:05:27.656156  434236 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:27.656227  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:27.690217  434236 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:27.690235  434236 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:05:27.690276  434236 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:27.719945  434236 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:27.719970  434236 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:27.720022  434236 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:27.757584  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:05:27.757608  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:27.757619  434236 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:27.757635  434236 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210813210121-393438 NodeName:default-k8s-different-port-20210813210121-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:27.757797  434236 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210813210121-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:05:27.757917  434236 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210813210121-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.163 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210121-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
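Editor's note: the empty `ExecStart=` line in the drop-in above is the standard systemd idiom for clearing the packaged unit's ExecStart before substituting minikube's own kubelet invocation; the flags encode the node identity and the containerd endpoints. A sketch of assembling such a flag line from a map (hypothetical helper, not minikube's template code):

	package main

	import (
		"fmt"
		"sort"
		"strings"
	)

	// flagLine renders "--key=value" flags in deterministic order, like the unit above.
	func flagLine(bin string, flags map[string]string) string {
		keys := make([]string, 0, len(flags))
		for k := range flags {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		parts := []string{bin}
		for _, k := range keys {
			parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
		}
		return strings.Join(parts, " ")
	}

	func main() {
		fmt.Println(flagLine("/var/lib/minikube/binaries/v1.21.3/kubelet", map[string]string{
			"container-runtime":          "remote",
			"container-runtime-endpoint": "unix:///run/containerd/containerd.sock",
			"node-ip":                    "192.168.39.163",
		}))
	}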
	I0813 21:05:27.757983  434236 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:05:27.767025  434236 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:27.767092  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:27.776126  434236 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (564 bytes)
	I0813 21:05:27.790927  434236 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:05:27.803474  434236 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0813 21:05:27.816691  434236 ssh_runner.go:149] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:27.820882  434236 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:27.833898  434236 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438 for IP: 192.168.39.163
	I0813 21:05:27.833958  434236 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:27.833985  434236 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:27.834066  434236 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.key
	I0813 21:05:27.834099  434236 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.key.a64e5ae8
	I0813 21:05:27.834123  434236 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.key
	I0813 21:05:27.834281  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:27.834389  434236 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:27.834408  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:27.834446  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:27.834483  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:27.834513  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:27.834565  434236 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:27.835926  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:27.868872  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:05:27.898347  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:27.927284  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:05:27.955412  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:27.982208  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:28.008804  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:28.037793  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:28.062318  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:28.090589  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:28.113355  434236 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:28.139026  434236 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:05:28.159732  434236 ssh_runner.go:149] Run: openssl version
	I0813 21:05:28.167042  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:28.179222  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.187603  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.187662  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:28.197216  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:05:28.205497  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:28.214015  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.219020  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.219071  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:28.225521  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:05:28.233782  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:28.241654  434236 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.247002  434236 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.247039  434236 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:28.253102  434236 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
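Editor's note: the symlink names in the commands above (51391683.0, 3ec20f2e.0, b5213941.0) are the subject hashes printed by `openssl x509 -hash -noout -in <cert>`; linking `<hash>.0` into /etc/ssl/certs is what lets OpenSSL-based clients find each CA by hash lookup. A sketch of deriving the link name by shelling out to openssl, as the log does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash for a PEM certificate.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", h)
	}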
	I0813 21:05:28.262141  434236 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210813210121-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210813210121-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:28.262269  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:28.262329  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:28.299554  434236 cri.go:76] found id: ""
	I0813 21:05:28.299631  434236 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:28.311680  434236 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:28.311708  434236 kubeadm.go:600] restartCluster start
	I0813 21:05:28.311755  434236 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:28.323986  434236 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.324997  434236 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210813210121-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:28.325273  434236 kubeconfig.go:128] "default-k8s-different-port-20210813210121-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:28.325795  434236 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:05:28.330761  434236 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:28.340580  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.340634  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.355687  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.556114  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.556194  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.570589  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.755828  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.755920  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.768185  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:28.956502  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:28.956587  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:28.972504  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.156801  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.156900  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.170261  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.356543  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.356620  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.371862  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.556146  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.556222  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.568431  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.756624  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.756716  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.772102  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:29.956366  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:29.956434  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:29.972817  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.156118  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.156213  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.173007  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.356290  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.356390  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.370997  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.556270  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.556363  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.569087  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.756426  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.756499  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.766215  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:30.956579  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:30.956667  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:30.966316  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.156673  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.156748  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.166867  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.355975  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.356076  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.367186  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.367210  434236 api_server.go:164] Checking apiserver status ...
	I0813 21:05:31.367256  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:31.377313  434236 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:31.377340  434236 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
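Editor's note: the long run of "Checking apiserver status" lines above is a fixed-interval poll (roughly every 200ms here) for a kube-apiserver process; once the overall wait expires with no PID, restartCluster concludes the control plane needs reconfiguring. A sketch of that poll, with an illustrative deadline:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverPID polls pgrep until a kube-apiserver process shows up or time runs out.
	func apiserverPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil
			}
			time.Sleep(200 * time.Millisecond) // pgrep exits 1 when nothing matches
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver")
	}

	func main() {
		pid, err := apiserverPID(3 * time.Second)
		fmt.Println(pid, err)
	}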
	I0813 21:05:31.377351  434236 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:31.377369  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:31.377436  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:31.420871  434236 cri.go:76] found id: ""
	I0813 21:05:31.420936  434236 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:31.437839  434236 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:31.447696  434236 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:31.447749  434236 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:31.456183  434236 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:31.456200  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:27.248144  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:27.747426  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:28.247901  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:28.747856  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:29.248076  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:29.747972  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:30.248094  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:30.747545  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.247708  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.747454  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:31.730142  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:32.610832  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:32.904123  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:33.089553  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
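Editor's note: rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directories. A sketch of driving that sequence — the PATH prefix and yaml path are taken from the log lines above; the loop itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase " + p +
				" --config /var/tmp/minikube/kubeadm.yaml"
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				fmt.Println("phase failed:", p, err)
				return
			}
			fmt.Println("phase ok:", p)
		}
	}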
	I0813 21:05:33.245454  434236 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:33.245536  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.762559  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.263157  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.762910  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.263052  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.762700  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.262410  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:32.247698  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:32.747521  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.247272  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:33.748045  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.247556  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.747910  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.247445  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:35.747731  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.247209  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:36.747718  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:34.915344  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:05:34.915378  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:05:34.915389  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | command : exit 0
	I0813 21:05:34.915403  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | err     : exit status 255
	I0813 21:05:34.915415  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | output  : 
	I0813 21:05:39.004545  434426 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:05:39.044544  434426 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:05:39.044628  434426 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:39.121568  434426 ssh_runner.go:149] Run: containerd --version
	I0813 21:05:36.763121  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.262994  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.762859  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.263086  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.763215  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.262402  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.762448  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.262367  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.762311  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.262754  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.247466  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:37.747360  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.248246  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:38.748104  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.247303  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.748087  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.247393  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:40.747662  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.247902  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:41.748222  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:39.186148  434426 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0813 21:05:39.186217  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetIP
	I0813 21:05:39.191526  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:39.191849  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:05:39.191873  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:05:39.192100  434426 ssh_runner.go:149] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0813 21:05:39.196656  434426 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:39.209929  434426 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:05:39.209982  434426 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:39.252981  434426 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:05:39.253001  434426 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:05:39.253043  434426 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:05:39.289557  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:05:39.289587  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:39.289599  434426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:05:39.289616  434426 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.54 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210813210044-393438 NodeName:no-preload-20210813210044-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:05:39.289783  434426 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210813210044-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
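	The rendered config above is four YAML documents in one file, split on "---": InitConfiguration and ClusterConfiguration are consumed by kubeadm itself, KubeletConfiguration is handed to the kubelet, and KubeProxyConfiguration to kube-proxy. Once it lands on the node as /var/tmp/minikube/kubeadm.yaml (the scp below), a quick structural sanity check is to list the document boundaries and kinds:

	# sketch: list the kinds in the multi-document kubeadm config
	grep -E '^(---|kind:)' /var/tmp/minikube/kubeadm.yaml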
	
	I0813 21:05:39.289902  434426 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210813210044-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.54 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813210044-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
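	The empty ExecStart= line in the kubelet drop-in above is deliberate systemd idiom: list-type directives such as ExecStart accumulate across drop-ins, so a bare ExecStart= first clears the base unit's command before the drop-in sets its own. On the node, the merged result can be checked with:

	systemctl cat kubelet.service        # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart  # the effective command line after merging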
	I0813 21:05:39.289962  434426 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:05:39.301767  434426 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:05:39.301858  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:05:39.312854  434426 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (552 bytes)
	I0813 21:05:39.336182  434426 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:05:39.354752  434426 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0813 21:05:39.371503  434426 ssh_runner.go:149] Run: grep 192.168.61.54	control-plane.minikube.internal$ /etc/hosts
	I0813 21:05:39.375918  434426 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:05:39.387685  434426 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438 for IP: 192.168.61.54
	I0813 21:05:39.387740  434426 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:05:39.387764  434426 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:05:39.387864  434426 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.key
	I0813 21:05:39.387889  434426 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.key.f8b022bd
	I0813 21:05:39.387919  434426 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.key
	I0813 21:05:39.388040  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:05:39.388086  434426 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:05:39.388102  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:05:39.388136  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:05:39.388188  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:05:39.388249  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:05:39.388315  434426 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:39.389647  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:05:39.417065  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:05:39.450225  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:05:39.473274  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:05:39.496042  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:05:39.518685  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:05:39.541018  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:05:39.566796  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:05:39.596134  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:05:39.617569  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:05:39.639014  434426 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:05:39.667571  434426 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:05:39.681262  434426 ssh_runner.go:149] Run: openssl version
	I0813 21:05:39.687737  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:05:39.695800  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.700510  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.700555  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:05:39.708604  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:05:39.718381  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:05:39.728624  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.734280  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.734329  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:05:39.742886  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:05:39.754769  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:05:39.769034  434426 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.776798  434426 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.776849  434426 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:05:39.785325  434426 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
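	The hash-named symlinks being created here (b5213941.0, 51391683.0, 3ec20f2e.0) are how OpenSSL's default verify path finds CAs: it looks certificates up in /etc/ssl/certs by a hash of their subject, so each CA needs a companion <subject-hash>.0 link. The hash is exactly what the openssl x509 -hash -noout calls above print, so the whole sequence for one CA boils down to this sketch:

	# sketch: install one CA into OpenSSL's hashed-lookup directory
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
	H=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"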
	I0813 21:05:39.793427  434426 kubeadm.go:390] StartCluster: {Name:no-preload-20210813210044-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210813210044-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:05:39.793515  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:05:39.793567  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:39.841767  434426 cri.go:76] found id: "a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd"
	I0813 21:05:39.841797  434426 cri.go:76] found id: "0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67"
	I0813 21:05:39.841805  434426 cri.go:76] found id: "cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e"
	I0813 21:05:39.841810  434426 cri.go:76] found id: "1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645"
	I0813 21:05:39.841815  434426 cri.go:76] found id: "f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4"
	I0813 21:05:39.841825  434426 cri.go:76] found id: "1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1"
	I0813 21:05:39.841840  434426 cri.go:76] found id: "f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402"
	I0813 21:05:39.841848  434426 cri.go:76] found id: ""
	I0813 21:05:39.841897  434426 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:05:39.874659  434426 cri.go:103] JSON = null
	W0813 21:05:39.874731  434426 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 7
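	Before restarting the cluster, minikube appears to look for paused kube-system containers so it can unpause them: here runc list in containerd's k8s.io root returned null (no paused containers) while crictl ps -a saw seven, so the mismatch is logged as a warning and the restart path continues. The two views can be compared by hand with the same commands the log runs:

	# compare runc's container list with crictl's kube-system view
	sudo runc --root /run/containerd/runc/k8s.io list -f json
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l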
	I0813 21:05:39.874793  434426 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:05:39.883098  434426 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:05:39.883133  434426 kubeadm.go:600] restartCluster start
	I0813 21:05:39.883183  434426 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:05:39.891774  434426 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:39.892947  434426 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210813210044-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:05:39.893403  434426 kubeconfig.go:128] "no-preload-20210813210044-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:05:39.894254  434426 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:05:39.897519  434426 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:05:39.904898  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:39.904947  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:39.914791  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.115222  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.115313  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.130011  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.315354  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.315437  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.328987  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.515338  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.515430  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.526348  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.715644  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.715729  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.727876  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:40.915221  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:40.915304  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:40.925675  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.115841  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.115917  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.129609  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.315884  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.315953  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.331198  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.514920  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.515010  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.531118  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.715384  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.715481  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.732231  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:41.915489  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:41.915579  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:41.928849  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.115137  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.115246  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.129931  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
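	Each "Checking apiserver status ..." round above is the same probe: pgrep with -x (exact match), -n (newest process), and -f (match the full command line) for a kube-apiserver whose command line mentions minikube. Exit status 1 means no match yet, so the caller retries on a short interval (roughly every 200ms for process 434426, every 500ms for the others). Condensed to a shell equivalent:

	# sketch: the poll the log keeps repeating, condensed into one loop
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.2; done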
	I0813 21:05:37.915942  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:37.920675  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:37.921020  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:01:32 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:37.921054  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:37.921190  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:37.921223  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:37.921271  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:37.921289  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:37.921302  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
	I0813 21:05:41.762446  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.263222  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.762296  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.262498  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.762232  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.262236  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.762300  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.262217  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.762238  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.262311  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.248257  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.747441  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.247903  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:43.747303  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.248179  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.747586  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.248077  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.748033  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.247697  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.747426  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:42.315098  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.315169  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.326811  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.515062  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.515137  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.525584  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.715863  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.715951  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.729056  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.915375  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.915483  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.926975  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.926999  434426 api_server.go:164] Checking apiserver status ...
	I0813 21:05:42.927046  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:05:42.937063  434426 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:05:42.937085  434426 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:05:42.937092  434426 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:05:42.937109  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:05:42.937157  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:05:42.972880  434426 cri.go:76] found id: "a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd"
	I0813 21:05:42.972899  434426 cri.go:76] found id: "0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67"
	I0813 21:05:42.972908  434426 cri.go:76] found id: "cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e"
	I0813 21:05:42.972912  434426 cri.go:76] found id: "1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645"
	I0813 21:05:42.972917  434426 cri.go:76] found id: "f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4"
	I0813 21:05:42.972923  434426 cri.go:76] found id: "1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1"
	I0813 21:05:42.972928  434426 cri.go:76] found id: "f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402"
	I0813 21:05:42.972933  434426 cri.go:76] found id: ""
	I0813 21:05:42.972940  434426 cri.go:221] Stopping containers: [a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd 0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67 cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e 1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645 f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4 1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1 f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402]
	I0813 21:05:42.972988  434426 ssh_runner.go:149] Run: which crictl
	I0813 21:05:42.977301  434426 ssh_runner.go:149] Run: sudo /bin/crictl stop a5ca654816273571ad39ae304722652989f2a69e9ccd0256ccf23f4cbc244abd 0d285b2e29499c2e1d9b734b49c97a04b18540b7360ed9e34e8acfd407100d67 cf6143a55b051d9efc422092ace8c862445c4967a18ee739bf39cfad5460448e 1a65e64cbc9f06e6ddf3d6194452927f859afa1b62ed7d907245763f06fec645 f781a92e61ada43905b902c2ac9fca7404b8495aee2af7d7795afb32857f23e4 1cf854ac4e58590f5719949ac3de604a490ab8ae41cc5dfec30aaee4cfa86aa1 f25d2b1892e38d48bee5b2f604058fa84fc6504d779b29320f01da144a8d3402
	I0813 21:05:43.015098  434426 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:05:43.030422  434426 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:05:43.038207  434426 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:05:43.038258  434426 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:43.046083  434426 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:05:43.046108  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:43.234184  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.031394  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.286845  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:05:44.434332  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
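	Because existing configuration files were found earlier, restartCluster rebuilds the control plane piecewise with individual kubeadm init phase subcommands (certs, kubeconfigs, kubelet start, static-pod manifests, local etcd) rather than a full kubeadm init, each pinned to the v1.22.0-rc.0 binaries via a PATH override. The same five phases from the log, condensed into a loop for manual replay on the node:

	CFG=/var/tmp/minikube/kubeadm.yaml
	BIN=/var/lib/minikube/binaries/v1.22.0-rc.0
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" splits into two arguments
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	done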
	I0813 21:05:44.561020  434426 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:05:44.561093  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.072644  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:45.572512  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.072935  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:46.572209  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.072820  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:44.070383  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:05:44.070417  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:05:44.070425  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | command : exit 0
	I0813 21:05:44.070435  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | err     : exit status 255
	I0813 21:05:44.070444  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | output  : 
	I0813 21:05:47.070752  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Getting to WaitForSSH function...
	I0813 21:05:47.075748  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.076089  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.076124  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.076266  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH client type: external
	I0813 21:05:47.076301  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa (-rw-------)
	I0813 21:05:47.076341  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:05:47.076360  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | About to run SSH command:
	I0813 21:05:47.076373  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | exit 0
	I0813 21:05:47.209990  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:05:47.210279  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetConfigRaw
	I0813 21:05:47.210980  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:47.215599  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.215971  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.216004  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.216197  434502 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/config.json ...
	I0813 21:05:47.216352  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.216531  434502 machine.go:88] provisioning docker machine ...
	I0813 21:05:47.216560  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.216747  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.216909  434502 buildroot.go:166] provisioning hostname "embed-certs-20210813210115-393438"
	I0813 21:05:47.216930  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.217053  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.221366  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.221681  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.221711  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.221783  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.221941  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.222076  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.222174  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.222331  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.222497  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.222516  434502 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210813210115-393438 && echo "embed-certs-20210813210115-393438" | sudo tee /etc/hostname
	I0813 21:05:47.350613  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210813210115-393438
	
	I0813 21:05:47.350646  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.355442  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.355764  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.355801  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.355886  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.356046  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.356191  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.356328  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.356481  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.356629  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.356649  434502 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210813210115-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210813210115-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210813210115-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:05:47.480637  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:05:47.480667  434502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:05:47.480689  434502 buildroot.go:174] setting up certificates
	I0813 21:05:47.480699  434502 provision.go:83] configureAuth start
	I0813 21:05:47.480708  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetMachineName
	I0813 21:05:47.480943  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:47.485661  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.485926  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.485958  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.486060  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.490062  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.490323  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.490355  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.490435  434502 provision.go:138] copyHostCerts
	I0813 21:05:47.490506  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:05:47.490518  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:05:47.490574  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:05:47.490661  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:05:47.490683  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:05:47.490709  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:05:47.490777  434502 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:05:47.490788  434502 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:05:47.490812  434502 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:05:47.490871  434502 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210813210115-393438 san=[192.168.72.95 192.168.72.95 localhost 127.0.0.1 minikube embed-certs-20210813210115-393438]
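	Provisioning generates a fresh docker-machine server certificate whose SAN list (the san=[...] above) covers the VM IP, localhost, and both hostnames. Once written, the SANs can be confirmed with openssl (the -ext flag assumes OpenSSL 1.1.1 or newer):

	# sketch: confirm the SANs on the generated server cert (path from the log)
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem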
	I0813 21:05:47.660627  434502 provision.go:172] copyRemoteCerts
	I0813 21:05:47.660682  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:05:47.660708  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.664944  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.665226  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.665254  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.665398  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.665520  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.665651  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.665742  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:47.750640  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:05:47.766712  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 21:05:47.782725  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:05:47.797861  434502 provision.go:86] duration metric: configureAuth took 317.153687ms
	I0813 21:05:47.797877  434502 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:05:47.798055  434502 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:05:47.798071  434502 machine.go:91] provisioned docker machine in 581.525098ms
	I0813 21:05:47.798080  434502 start.go:267] post-start starting for "embed-certs-20210813210115-393438" (driver="kvm2")
	I0813 21:05:47.798088  434502 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:05:47.798118  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.798393  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:05:47.798425  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.802968  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.803268  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.803301  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.803366  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.803537  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.803688  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.803819  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:46.762882  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.263194  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.762994  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.262820  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.762597  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.262687  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.762934  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.262621  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.763234  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.262884  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.247753  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.747493  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.247858  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.748175  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.247346  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.747616  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.247612  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.748066  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.247916  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.748114  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:47.573012  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.072262  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:48.573005  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.072918  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:49.572925  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.072383  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:50.572832  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.072251  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:51.572409  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.072071  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
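
Three runners (PIDs 434236, 434036, and 434426) are interleaved here, each polling for a kube-apiserver process at roughly 500ms intervals. A minimal sketch of such a wait loop; the 500ms cadence comes from the timestamps above, while the function name and timeout parameter are assumptions:

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess re-runs pgrep every 500ms, mirroring the
    // polling cadence in the log, until the process appears or the
    // deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }
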
	I0813 21:05:47.889883  434502 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:05:47.894242  434502 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:05:47.894265  434502 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:05:47.894308  434502 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:05:47.894408  434502 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:05:47.894509  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:05:47.900616  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:05:47.916161  434502 start.go:270] post-start completed in 118.068413ms
	I0813 21:05:47.916192  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:47.916429  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:47.921006  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.921302  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:47.921333  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:47.921403  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:47.921548  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.921671  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:47.921788  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:47.921919  434502 main.go:130] libmachine: Using SSH client type: native
	I0813 21:05:47.922054  434502 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.72.95 22 <nil> <nil>}
	I0813 21:05:47.922064  434502 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 21:05:48.039157  434502 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628888747.971364911
	
	I0813 21:05:48.039178  434502 fix.go:212] guest clock: 1628888747.971364911
	I0813 21:05:48.039185  434502 fix.go:225] Guest: 2021-08-13 21:05:47.971364911 +0000 UTC Remote: 2021-08-13 21:05:47.916414238 +0000 UTC m=+45.083995073 (delta=54.950673ms)
	I0813 21:05:48.039202  434502 fix.go:196] guest clock delta is within tolerance: 54.950673ms
	I0813 21:05:48.039209  434502 fix.go:57] fixHost completed within 27.209750115s
	I0813 21:05:48.039214  434502 start.go:80] releasing machines lock for "embed-certs-20210813210115-393438", held for 27.209782445s
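
fix.go above reads the guest clock with "date +%s.%N", compares it against the host's timestamp, and accepts the 54.950673ms delta as within tolerance. A sketch of that comparison; the tolerance constant is the caller's choice here, not minikube's actual value:

    package sketch

    import "time"

    // clockDelta returns the absolute guest/host clock difference and
    // whether it falls inside the allowed drift. With the values logged
    // above it reports roughly 55ms, well within tolerance.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, d <= tolerance
    }
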
	I0813 21:05:48.039259  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.039507  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:05:48.044093  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.044367  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.044399  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.044490  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.044649  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.045063  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:05:48.045313  434502 ssh_runner.go:149] Run: systemctl --version
	I0813 21:05:48.045327  434502 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:05:48.045335  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:48.045360  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:05:48.050777  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.051103  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.051134  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.051228  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:48.051380  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:48.051545  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:48.051692  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:48.051871  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.052188  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:05:48.052219  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:05:48.052357  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:05:48.052507  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:05:48.052648  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:05:48.052780  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:05:48.173452  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:05:48.173567  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:05:52.198912  434502 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02531638s)
	I0813 21:05:52.199078  434502 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:05:52.199145  434502 ssh_runner.go:149] Run: which lz4
	I0813 21:05:52.204358  434502 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 21:05:52.209369  434502 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:05:52.209398  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0813 21:05:51.763159  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.263132  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.763171  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.263124  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.762320  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.263218  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.762305  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.262842  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.763095  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.263173  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.248079  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.748031  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.247898  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:53.747859  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.248141  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:54.747850  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.247349  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:55.747304  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.247906  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:56.748393  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.573023  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:52.599967  434426 api_server.go:70] duration metric: took 8.038946734s to wait for apiserver process to appear ...
	I0813 21:05:52.599996  434426 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:05:52.600009  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:52.600694  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": dial tcp 192.168.61.54:8443: connect: connection refused
	I0813 21:05:53.101216  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:53.101986  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": dial tcp 192.168.61.54:8443: connect: connection refused
	I0813 21:05:53.601685  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:56.328428  434502 containerd.go:546] Took 4.124102 seconds to copy over tarball
	I0813 21:05:56.328505  434502 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
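
Because the stat probe found no /preloaded.tar.lz4 on the guest, the runner copies the ~930 MB preload tarball over SSH and unpacks it into /var with lz4. A sketch of that check-copy-extract sequence; sshRun and scpFile are hypothetical stand-ins for ssh_runner that shell out locally so the sketch stays self-contained (the real helpers execute on the guest over SSH):

    package sketch

    import "os/exec"

    // sshRun stands in for minikube's ssh_runner; the real thing runs the
    // command on the guest.
    func sshRun(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    // scpFile stands in for the scp step in the log; IP and user are the
    // ones shown above.
    func scpFile(local, remote string) error {
        return exec.Command("scp", local, "docker@192.168.72.95:"+remote).Run()
    }

    // ensurePreload mirrors the stat -> scp -> tar flow in the log:
    // copy the tarball only when it is missing, then extract it.
    func ensurePreload(localTarball string) error {
        const remote = "/preloaded.tar.lz4"
        if sshRun("stat", remote) != nil { // exit status 1: not present yet
            if err := scpFile(localTarball, remote); err != nil {
                return err
            }
        }
        return sshRun("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote)
    }
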
	I0813 21:05:58.602269  434426 api_server.go:255] stopped: https://192.168.61.54:8443/healthz: Get "https://192.168.61.54:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:05:59.101446  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:59.226644  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:05:59.226678  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:05:59.601108  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:05:59.611587  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:05:59.611615  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:00.101414  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:00.119971  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:00.120000  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:00.601622  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:00.623033  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:00.623063  434426 api_server.go:101] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:01.101661  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:06:01.109930  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0813 21:06:01.121432  434426 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:06:01.121468  434426 api_server.go:129] duration metric: took 8.521464894s to wait for apiserver health ...
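
The healthz progression above is the typical startup sequence for a restarting apiserver: connection refused while the process binds, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while individual poststarthooks finish, then 200 "ok". A single probe might look like this sketch; TLS verification is skipped here for brevity, and whether minikube's probe verifies the cluster CA is not shown in the log:

    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // probeHealthz returns true once /healthz answers 200 "ok"; 403 and 500
    // responses count as "not ready yet", matching the loop above.
    func probeHealthz(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err // e.g. connection refused while the apiserver starts
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }
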
	I0813 21:06:01.121481  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:06:01.121494  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:05:56.763214  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.262238  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.762416  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.263249  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.762893  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.262377  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.762933  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.263263  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.762710  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.262528  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.247736  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:57.747851  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.247940  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:58.748248  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.247931  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:05:59.747954  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.247906  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:00.747960  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.247883  434036 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:01.262304  434036 api_server.go:70] duration metric: took 54.026388934s to wait for apiserver process to appear ...
	I0813 21:06:01.262331  434036 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:06:01.262343  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:01.123271  434426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:01.123346  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:01.137091  434426 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:06:01.193557  434426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:01.214072  434426 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:01.214117  434426 system_pods.go:61] "coredns-78fcd69978-f47dd" [4aec428d-547b-4d87-bc39-78bbccb8baea] Running
	I0813 21:06:01.214125  434426 system_pods.go:61] "etcd-no-preload-20210813210044-393438" [7a80ae51-8063-4d28-8ccb-c0cfcfe14c33] Running
	I0813 21:06:01.214131  434426 system_pods.go:61] "kube-apiserver-no-preload-20210813210044-393438" [db01fbc4-e895-4457-bff3-53cff9d0699a] Running
	I0813 21:06:01.214146  434426 system_pods.go:61] "kube-controller-manager-no-preload-20210813210044-393438" [035b8aaf-5080-423b-844e-4f0a28bd0c3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:06:01.214154  434426 system_pods.go:61] "kube-proxy-jl8gn" [20fe4049-f327-444e-8e06-19de55971a1e] Running
	I0813 21:06:01.214165  434426 system_pods.go:61] "kube-scheduler-no-preload-20210813210044-393438" [3e93dca1-6885-4de5-8d71-8597dab2a441] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 21:06:01.214175  434426 system_pods.go:61] "metrics-server-7c784ccb57-9bt6z" [17511551-ab42-48c3-adf3-3221e19fc573] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:01.214187  434426 system_pods.go:61] "storage-provisioner" [9488dce9-830b-44a7-93d1-cfb9d1d96514] Running
	I0813 21:06:01.214196  434426 system_pods.go:74] duration metric: took 20.612475ms to wait for pod list to return data ...
	I0813 21:06:01.214210  434426 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:01.222653  434426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:01.222708  434426 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:01.222727  434426 node_conditions.go:105] duration metric: took 8.511383ms to run NodePressure ...
	I0813 21:06:01.222759  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:01.762658  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:05.763242  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:06.263188  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:05.242867  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:05.255235  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:05.756016  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:07.324246  434502 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.995714396s)
	I0813 21:06:07.324273  434502 containerd.go:553] Took 10.995815 seconds to extract the tarball
	I0813 21:06:07.324286  434502 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:06:07.386418  434502 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:06:07.538846  434502 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:06:07.603956  434502 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:06:07.647754  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:06:07.660926  434502 docker.go:153] disabling docker service ...
	I0813 21:06:07.660980  434502 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:06:07.672885  434502 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:06:07.684627  434502 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:06:07.818921  434502 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:06:07.320076  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:07.320113  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:07.755543  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:07.994267  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:07.994311  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:08.255650  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:08.313630  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:08.313672  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:08.756343  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:08.766284  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0813 21:06:08.766312  434036 api_server.go:101] status: https://192.168.83.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0813 21:06:09.255426  434036 api_server.go:239] Checking apiserver healthz at https://192.168.83.180:8443/healthz ...
	I0813 21:06:09.264852  434036 api_server.go:265] https://192.168.83.180:8443/healthz returned 200:
	ok
	I0813 21:06:09.275355  434036 api_server.go:139] control plane version: v1.14.0
	I0813 21:06:09.275378  434036 api_server.go:129] duration metric: took 8.013041101s to wait for apiserver health ...
	I0813 21:06:09.275391  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:06:09.275400  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:07.977070  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:06:07.994507  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:06:08.014648  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
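
For reference, the base64 payload piped through "base64 -d" above is containerd's config.toml; its opening lines decode to:

    root = "/var/lib/containerd"
    state = "/run/containerd"
    oom_score = 0
    [grpc]
      address = "/run/containerd/containerd.sock"
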
	I0813 21:06:08.032466  434502 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:06:08.041649  434502 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:06:08.041710  434502 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:06:08.067719  434502 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:06:08.077684  434502 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:06:08.247330  434502 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:06:08.315801  434502 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:06:08.315881  434502 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:06:08.322446  434502 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
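
retry.go above schedules a single retry after a non-round 1.104660288s, which suggests a jittered delay. A loose sketch of that pattern; the 10% jitter model is an assumption, not minikube's documented behaviour:

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryAfterJitter sleeps for the base delay plus up to 10% random
    // jitter, then invokes fn once more and returns its error.
    func retryAfterJitter(base time.Duration, fn func() error) error {
        jitter := time.Duration(rand.Int63n(int64(base / 10)))
        time.Sleep(base + jitter)
        return fn()
    }
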
	I0813 21:06:09.427340  434502 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:06:09.434702  434502 start.go:413] Will wait 60s for crictl version
	I0813 21:06:09.434773  434502 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:06:09.475662  434502 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:06:09.475749  434502 ssh_runner.go:149] Run: containerd --version
	I0813 21:06:09.516486  434502 ssh_runner.go:149] Run: containerd --version
	I0813 21:06:06.763215  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:07.263058  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:07.762852  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:08.262800  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:08.763094  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.262916  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.762562  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:10.262282  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:10.763196  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:11.263123  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:09.277075  434036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:09.277166  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:09.297330  434036 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:06:09.320632  434036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:09.333565  434036 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:09.333597  434036 system_pods.go:61] "coredns-fb8b8dccf-sgnld" [a92bab36-fc79-11eb-9c66-525400553b5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:06:09.333602  434036 system_pods.go:61] "etcd-old-k8s-version-20210813205952-393438" [ccce6bed-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333609  434036 system_pods.go:61] "kube-apiserver-old-k8s-version-20210813205952-393438" [cb9d3317-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333616  434036 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210813205952-393438" [bff107c1-fc79-11eb-9c66-525400553b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0813 21:06:09.333623  434036 system_pods.go:61] "kube-proxy-zrnsp" [a94a53aa-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 21:06:09.333629  434036 system_pods.go:61] "kube-scheduler-old-k8s-version-20210813205952-393438" [c809ace6-fc79-11eb-9c66-525400553b5e] Running
	I0813 21:06:09.333635  434036 system_pods.go:61] "metrics-server-8546d8b77b-dm4n5" [d477f50c-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:09.333645  434036 system_pods.go:61] "storage-provisioner" [aaf35a18-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:06:09.333651  434036 system_pods.go:74] duration metric: took 12.999255ms to wait for pod list to return data ...
	I0813 21:06:09.333661  434036 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:09.338462  434036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:09.338518  434036 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:09.338577  434036 node_conditions.go:105] duration metric: took 4.908525ms to run NodePressure ...
	I0813 21:06:09.338598  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:09.917816  434036 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:09.932993  434036 kubeadm.go:746] kubelet initialised
	I0813 21:06:09.933021  434036 kubeadm.go:747] duration metric: took 15.176768ms waiting for restarted kubelet to initialise ...
	I0813 21:06:09.933032  434036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:09.948756  434036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:12.000537  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:09.166061  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (7.94326905s)
	I0813 21:06:09.166108  434426 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:09.176349  434426 kubeadm.go:746] kubelet initialised
	I0813 21:06:09.176372  434426 kubeadm.go:747] duration metric: took 10.253817ms waiting for restarted kubelet to initialise ...
	I0813 21:06:09.176382  434426 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:06:09.203731  434426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-f47dd" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:11.323618  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:09.561886  434502 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:06:09.561939  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetIP
	I0813 21:06:09.568523  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:06:09.568978  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:06:09.569057  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:06:09.569224  434502 ssh_runner.go:149] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0813 21:06:09.574609  434502 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
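
The /etc/hosts update above uses a filter-append-copy idiom: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result lands in a temp file that sudo cp then installs (a plain shell redirect would run without root and fail on /etc/hosts). The same idea in Go, as a sketch with illustrative paths and permissions:

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // upsertHostsEntry removes any existing line for host, appends the new
    // ip -> host mapping, and installs the result with sudo cp, mirroring
    // the shell one-liner in the log.
    func upsertHostsEntry(ip, host string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
    }
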
	I0813 21:06:09.587637  434502 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:06:09.587717  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:06:09.631577  434502 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:06:09.631608  434502 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:06:09.631667  434502 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:06:09.676879  434502 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:06:09.676920  434502 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:06:09.676977  434502 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:06:09.724335  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:06:09.724374  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:09.724388  434502 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:06:09.724408  434502 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.95 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210813210115-393438 NodeName:embed-certs-20210813210115-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.72.95 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:06:09.724698  434502 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210813210115-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:06:09.724884  434502 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210813210115-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.95 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
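The kubeadm documents above are rendered from the options struct logged at kubeadm.go:153. A hedged sketch of how such a document can be generated with text/template (the struct and template here are an illustrative subset, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// initConfig holds the subset of parameters visible in the log above.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

// A trimmed InitConfiguration template matching the first document above.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	cfg := initConfig{
		AdvertiseAddress: "192.168.72.95",
		BindPort:         8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "embed-certs-20210813210115-393438",
		NodeIP:           "192.168.72.95",
	}
	_ = t.Execute(os.Stdout, cfg) // writes the rendered YAML to stdout
}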
	I0813 21:06:09.724996  434502 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:06:09.739481  434502 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:06:09.739570  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:06:09.748730  434502 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0813 21:06:09.765998  434502 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:06:09.782632  434502 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I0813 21:06:09.799961  434502 ssh_runner.go:149] Run: grep 192.168.72.95	control-plane.minikube.internal$ /etc/hosts
	I0813 21:06:09.804846  434502 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:06:09.818569  434502 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438 for IP: 192.168.72.95
	I0813 21:06:09.818631  434502 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:06:09.818655  434502 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:06:09.818749  434502 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/client.key
	I0813 21:06:09.818782  434502 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.key.a2bb46f7
	I0813 21:06:09.818808  434502 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.key
	I0813 21:06:09.818950  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:06:09.819005  434502 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:06:09.819020  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:06:09.819060  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:06:09.819095  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:06:09.819129  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:06:09.819196  434502 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:06:09.820623  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:06:09.843444  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:06:09.865562  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:06:09.888012  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/embed-certs-20210813210115-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 21:06:09.908367  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:06:09.931692  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:06:09.955112  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:06:09.977220  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:06:09.996735  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:06:10.017440  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:06:10.036588  434502 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:06:10.057107  434502 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:06:10.071022  434502 ssh_runner.go:149] Run: openssl version
	I0813 21:06:10.078646  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:06:10.088543  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.093262  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.093314  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:06:10.101242  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:06:10.111569  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:06:10.121721  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.128059  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.128117  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:06:10.136057  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:06:10.145811  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:06:10.155469  434502 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.161508  434502 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.161558  434502 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:06:10.170155  434502 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
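The openssl runs above implement OpenSSL's standard CA lookup convention: a CA is found through the symlink <subject-hash>.0 in /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout`. That is why minikubeCA.pem is linked as b5213941.0 and the test certs get their own hash names. A minimal sketch of the same two steps (local paths assumed, no sudo handling):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates
// the <hash>.0 symlink in dir, the same convention applied in the log.
func linkCACert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// "minikubeCA.pem" in the current directory is an assumed input.
	if err := linkCACert("minikubeCA.pem", "."); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}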
	I0813 21:06:10.181353  434502 kubeadm.go:390] StartCluster: {Name:embed-certs-20210813210115-393438 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813210115-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:06:10.181479  434502 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:06:10.181532  434502 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:06:10.226656  434502 cri.go:76] found id: ""
	I0813 21:06:10.226748  434502 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:06:10.238541  434502 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 21:06:10.238572  434502 kubeadm.go:600] restartCluster start
	I0813 21:06:10.238631  434502 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 21:06:10.248775  434502 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.250208  434502 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210813210115-393438" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:06:10.250844  434502 kubeconfig.go:128] "embed-certs-20210813210115-393438" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig - will repair!
	I0813 21:06:10.251920  434502 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:06:10.255366  434502 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 21:06:10.266016  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.266081  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.280298  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.480725  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.480833  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.494154  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.681384  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.681480  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.693111  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:10.880851  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:10.880941  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:10.893836  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.080959  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.081031  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.091638  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.280934  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.281012  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.291102  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.481367  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.481434  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.490618  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.680878  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.680955  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.690761  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:11.881063  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:11.881137  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:11.890865  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.081085  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.081173  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.091155  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.280366  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.280439  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.290184  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.481389  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.481488  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.492213  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:12.680455  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.680522  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.690091  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
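The repeated checks above poll for the apiserver process with pgrep, treating exit status 1 (no process matched) as "not up yet" rather than a hard failure. A sketch of that probe loop (the ~500ms cadence matches the timestamps in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID returns the newest matching kube-apiserver pid, or ""
// while the process has not appeared; pgrep exits 1 when nothing
// matches, which is the "stopped" case logged above.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return "", nil // no match yet: keep polling
		}
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for {
		pid, err := apiserverPID()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		if pid != "" {
			fmt.Println("apiserver pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}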
	I0813 21:06:11.762406  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:12.263211  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:12.763026  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:13.262897  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:13.762877  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.263085  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.762761  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.262346  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.762737  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.262977  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:14.001704  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:16.501470  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:13.821161  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:15.821346  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:12.880681  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:12.880762  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:12.893122  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.081147  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.081221  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.090555  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.280837  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.280909  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.291851  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.291871  434502 api_server.go:164] Checking apiserver status ...
	I0813 21:06:13.291911  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0813 21:06:13.300667  434502 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0813 21:06:13.300686  434502 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0813 21:06:13.300694  434502 kubeadm.go:1032] stopping kube-system containers ...
	I0813 21:06:13.300757  434502 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:06:13.300806  434502 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:06:13.337181  434502 cri.go:76] found id: ""
	I0813 21:06:13.337237  434502 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 21:06:13.351877  434502 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:06:13.360342  434502 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:06:13.360395  434502 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:06:13.369326  434502 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 21:06:13.369345  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:13.612776  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.263028  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.506952  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:14.637981  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
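On this restart path the log runs kubeadm's init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config, instead of a full `kubeadm init`. A sketch of that sequencing (paths as in the log; error handling condensed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as the log; each call reuses the generated config.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}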
	I0813 21:06:14.761221  434502 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:06:14.761297  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.276033  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:15.775557  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.275444  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.775736  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.275892  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.775800  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:16.763027  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.262268  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:17.762930  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.262889  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.762606  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.262504  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.762764  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.262424  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.763080  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.263257  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.502462  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.505177  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.322424  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:20.817641  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:18.276249  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:18.775759  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.276156  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:19.775649  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.275540  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:20.776363  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.276085  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.775549  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:22.275554  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:22.775686  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.762982  434236 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:21.785282  434236 api_server.go:70] duration metric: took 48.539827542s to wait for apiserver process to appear ...
	I0813 21:06:21.785310  434236 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:06:21.785322  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:21.785942  434236 api_server.go:255] stopped: https://192.168.39.163:8444/healthz: Get "https://192.168.39.163:8444/healthz": dial tcp 192.168.39.163:8444: connect: connection refused
	I0813 21:06:22.286681  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.246872  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:26.246922  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:26.287144  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.298988  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:06:26.299009  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:06:23.003616  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.004288  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:22.819140  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:25.318572  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:26.786868  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:26.793342  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:26.793366  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:27.286691  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:27.292784  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:06:27.292812  434236 api_server.go:101] status: https://192.168.39.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:06:27.786271  434236 api_server.go:239] Checking apiserver healthz at https://192.168.39.163:8444/healthz ...
	I0813 21:06:27.792679  434236 api_server.go:265] https://192.168.39.163:8444/healthz returned 200:
	ok
	I0813 21:06:27.799994  434236 api_server.go:139] control plane version: v1.21.3
	I0813 21:06:27.800012  434236 api_server.go:129] duration metric: took 6.014696522s to wait for apiserver health ...
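The healthz probes above pass through three states: connection refused while the process boots, then 403/500 while the RBAC and registration post-start hooks finish, then 200 with body "ok". A sketch of the polling loop (the endpoint is the one from the log; the apiserver serves a cluster-internal cert, so verification is skipped here where a real client would pin the CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.163:8444/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond) // connection refused: keep waiting
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(500 * time.Millisecond) // 403/500: components still bootstrapping
	}
}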
	I0813 21:06:27.800022  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:06:27.800062  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:06:23.276286  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:23.776368  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:24.276462  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:24.775975  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:25.275957  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:25.775561  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:26.276428  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:26.775547  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.275581  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.775541  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:27.802064  434236 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:06:27.802121  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:06:27.809702  434236 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:06:27.826306  434236 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:06:27.841765  434236 system_pods.go:59] 8 kube-system pods found
	I0813 21:06:27.841793  434236 system_pods.go:61] "coredns-558bd4d5db-pgvfh" [1a07397c-0aca-43f9-a2b7-36a6b02771a5] Running
	I0813 21:06:27.841801  434236 system_pods.go:61] "etcd-default-k8s-different-port-20210813210121-393438" [7113d745-dc4c-4fce-afc4-d66d374933cb] Running
	I0813 21:06:27.841810  434236 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210813210121-393438" [f83aa51e-8e79-4280-8897-8762c33cfc4c] Running
	I0813 21:06:27.841816  434236 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210813210121-393438" [4a639055-ca6a-4a71-b697-f77eb4ede3a1] Running
	I0813 21:06:27.841822  434236 system_pods.go:61] "kube-proxy-59w6c" [61f4a377-504a-4826-a20b-3afdcb247fd6] Running
	I0813 21:06:27.841827  434236 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210813210121-393438" [6734eb94-94d5-4b97-9f3e-4090a1456e78] Running
	I0813 21:06:27.841838  434236 system_pods.go:61] "metrics-server-7c784ccb57-x428n" [67da7c22-bd45-4039-82bb-40a3de84b60f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:06:27.841848  434236 system_pods.go:61] "storage-provisioner" [f0d06d4f-d8e4-4be1-a716-153b7e89f6e4] Running
	I0813 21:06:27.841856  434236 system_pods.go:74] duration metric: took 15.534941ms to wait for pod list to return data ...
	I0813 21:06:27.841865  434236 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:06:27.846952  434236 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:06:27.846983  434236 node_conditions.go:123] node cpu capacity is 2
	I0813 21:06:27.847036  434236 node_conditions.go:105] duration metric: took 5.164912ms to run NodePressure ...
	I0813 21:06:27.847053  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:06:28.216153  434236 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:06:28.221385  434236 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0813 21:06:28.586960  434236 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0813 21:06:29.030426  434236 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 21:06:29.563354  434236 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 21:06:30.351601  434236 retry.go:31] will retry after 1.502072952s: kubelet not initialised
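The retry.go lines above show growing, slightly randomized wait times between kubelet checks. A sketch of that pattern, exponential backoff with jitter (constants are illustrative, not minikube's actual tuning):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a growing, jittered delay, printing
// the same "will retry after ..." style message as the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: kubelet not initialised\n", jittered)
		time.Sleep(jittered)
		delay *= 2 // double the base delay each round
	}
	return errors.New("retries exhausted")
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("kubelet not initialised") // stand-in probe
	})
}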
	I0813 21:06:27.503099  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:30.001034  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:32.001820  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:27.818355  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:29.818740  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:31.818830  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:28.276273  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:28.776262  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:29.276050  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:29.776362  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:30.275521  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:30.776092  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.276357  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.776415  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:32.275529  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:32.776092  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:31.861131  434236 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 21:06:32.940988  434236 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 21:06:34.820241  434236 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 21:06:34.505062  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:36.505849  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.819176  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:35.820987  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:33.276254  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:33.776368  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:34.276134  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:34.776202  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:35.275686  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:35.775829  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:36.276559  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:36.776220  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.275740  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.775808  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:37.376657  434236 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 21:06:39.006116  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:41.006424  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:37.829656  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:40.319134  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:38.275515  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:38.776398  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:39.276039  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:39.775524  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:40.275880  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:40.775535  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:41.275713  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:41.776051  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.276097  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.776117  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:42.515553  434236 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 21:06:42.519498  434036 pod_ready.go:92] pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.519524  434036 pod_ready.go:81] duration metric: took 32.570739003s waiting for pod "coredns-fb8b8dccf-sgnld" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.519538  434036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.534376  434036 pod_ready.go:92] pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.534400  434036 pod_ready.go:81] duration metric: took 14.853692ms waiting for pod "etcd-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.534413  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.542051  434036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.542072  434036 pod_ready.go:81] duration metric: took 7.649854ms waiting for pod "kube-apiserver-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.542085  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.551759  434036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.551778  434036 pod_ready.go:81] duration metric: took 9.684667ms waiting for pod "kube-controller-manager-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.551790  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zrnsp" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.558134  434036 pod_ready.go:92] pod "kube-proxy-zrnsp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.558151  434036 pod_ready.go:81] duration metric: took 6.353431ms waiting for pod "kube-proxy-zrnsp" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.558161  434036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.898183  434036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:42.898209  434036 pod_ready.go:81] duration metric: took 340.039042ms waiting for pod "kube-scheduler-old-k8s-version-20210813205952-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:42.898220  434036 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:45.308578  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:42.321737  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:44.816826  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:43.276237  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:43.775821  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:44.276105  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:44.775884  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:45.275614  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:45.775627  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:46.276396  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:46.776197  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.275874  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.776502  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:47.807536  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:50.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:47.318944  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:49.821979  434426 pod_ready.go:102] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:50.318268  434426 pod_ready.go:92] pod "coredns-78fcd69978-f47dd" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.318297  434426 pod_ready.go:81] duration metric: took 41.114530272s waiting for pod "coredns-78fcd69978-f47dd" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.318311  434426 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.325569  434426 pod_ready.go:92] pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.325586  434426 pod_ready.go:81] duration metric: took 7.26781ms waiting for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.325598  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.334259  434426 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.334302  434426 pod_ready.go:81] duration metric: took 8.696424ms waiting for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.334315  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.344213  434426 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.344233  434426 pod_ready.go:81] duration metric: took 9.907594ms waiting for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.344246  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jl8gn" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.365993  434426 pod_ready.go:92] pod "kube-proxy-jl8gn" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.366014  434426 pod_ready.go:81] duration metric: took 21.760778ms waiting for pod "kube-proxy-jl8gn" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.366026  434426 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.713844  434426 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:06:50.713867  434426 pod_ready.go:81] duration metric: took 347.831128ms waiting for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:06:50.713880  434426 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" ...
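The pod_ready waits above poll each pod's Ready condition until it reports "True", within a per-pod budget. A sketch using kubectl's jsonpath output (context and pod names taken from the log; a real implementation would use client-go rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true once the pod's Ready condition is "True".
func podReady(ctx, ns, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		ok, err := podReady("no-preload-20210813210044-393438", "kube-system", "metrics-server-7c784ccb57-9bt6z")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}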
	I0813 21:06:48.275626  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:48.775745  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:49.275564  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:49.775902  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:50.275938  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:50.775797  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:51.275647  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:51.776313  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:52.275795  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:52.775934  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
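
The repeated ssh_runner lines above poll the VM over SSH for a running apiserver process at roughly 500ms intervals, using pgrep -x (exact command-name match), -n (newest match), and -f (match against the full command line). A sketch of the same loop under stated assumptions: the ssh user and the reliance on the caller's ssh key setup are illustrative, while the host IP and pattern come from the log.

    // Sketch: polling for a process over SSH, as the ssh_runner lines above do.
    // The "docker@" user is an assumption; the IP and pattern are from the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// exit status 0 means pgrep found a matching process
    		cmd := exec.Command("ssh", "docker@192.168.72.95",
    			"sudo pgrep -xnf 'kube-apiserver.*minikube.*'")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }
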
	I0813 21:06:52.282208  434236 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0813 21:06:52.805669  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:54.807200  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:56.807643  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.151574  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:55.621726  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:53.275576  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:53.775720  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:54.276020  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:54.775811  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:55.275566  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:55.775883  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:56.276445  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:56.775931  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:57.276228  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:57.775511  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:58.807964  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:01.309268  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:57.622203  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:59.624586  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:02.126220  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:06:58.275484  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:58.776166  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:59.275550  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:06:59.775542  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:00.275745  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:00.775927  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:01.276255  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:01.776408  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:02.275616  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:02.775862  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:03.807156  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:04.130466  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:06.625720  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:03.276472  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:03.775573  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:04.275590  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:04.776446  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:05.275752  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:05.775925  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:06.276144  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:06.775472  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:07.275532  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:07.775754  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:11.229359  434236 kubeadm.go:746] kubelet initialised
	I0813 21:07:11.229389  434236 kubeadm.go:747] duration metric: took 43.013205159s waiting for restarted kubelet to initialise ...
	I0813 21:07:11.229401  434236 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:11.238577  434236 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.251014  434236 pod_ready.go:92] pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:11.251035  434236 pod_ready.go:81] duration metric: took 12.426366ms waiting for pod "coredns-558bd4d5db-pgvfh" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.251047  434236 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.255674  434236 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:11.255691  434236 pod_ready.go:81] duration metric: took 4.635703ms waiting for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:11.255706  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:08.806115  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:10.807534  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:09.120975  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:11.123527  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:08.276185  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:08.776211  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.276442  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.775975  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:07:09.790545  434502 api_server.go:70] duration metric: took 55.029322515s to wait for apiserver process to appear ...
	I0813 21:07:09.790626  434502 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:07:09.790646  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:09.792152  434502 api_server.go:255] stopped: https://192.168.72.95:8443/healthz: Get "https://192.168.72.95:8443/healthz": dial tcp 192.168.72.95:8443: connect: connection refused
	I0813 21:07:10.293011  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:14.475767  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:07:14.475812  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:07:14.793218  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:14.798556  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:07:14.798580  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:07:15.292846  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:15.300973  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:07:15.300998  434502 api_server.go:101] status: https://192.168.72.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:07:15.792511  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:07:15.804455  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0813 21:07:15.814303  434502 api_server.go:139] control plane version: v1.21.3
	I0813 21:07:15.814327  434502 api_server.go:129] duration metric: took 6.02368902s to wait for apiserver health ...
	I0813 21:07:15.814340  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:07:15.814348  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:07:13.272296  434236 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:14.269360  434236 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:14.269392  434236 pod_ready.go:81] duration metric: took 3.01367661s waiting for pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:14.269407  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:16.284244  434236 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.309553  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.808412  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:13.124394  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.136794  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:15.815910  434502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:07:15.815980  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:07:15.824375  434502 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
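
The 457-byte file copied here is the bridge CNI config for the node. A representative conflist write is sketched below; the exact fields, plugin names, and subnet in the real 1-k8s.conflist may differ, so treat the JSON content as illustrative only.

    // Sketch: writing a representative bridge CNI config.
    // The JSON fields and subnet are assumptions, not the exact 457-byte file above.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// /etc/cni/net.d is where the kubelet's CNI machinery discovers network configs
    	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }
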
	I0813 21:07:15.839198  434502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:07:15.859757  434502 system_pods.go:59] 8 kube-system pods found
	I0813 21:07:15.859784  434502 system_pods.go:61] "coredns-558bd4d5db-pt8qp" [4b80cbcb-3806-4176-9407-d1052b959548] Running
	I0813 21:07:15.859789  434502 system_pods.go:61] "etcd-embed-certs-20210813210115-393438" [73d1aa71-d312-44ce-aa0e-c6d79153b7c5] Running
	I0813 21:07:15.859794  434502 system_pods.go:61] "kube-apiserver-embed-certs-20210813210115-393438" [28e2fb79-3ee9-4880-a26e-231c3f384c1c] Running
	I0813 21:07:15.859798  434502 system_pods.go:61] "kube-controller-manager-embed-certs-20210813210115-393438" [8cd94f0c-7cc5-43af-91e1-86960d354db9] Running
	I0813 21:07:15.859805  434502 system_pods.go:61] "kube-proxy-kjphp" [38a3daef-9d16-4d30-a285-859858eb75fb] Running
	I0813 21:07:15.859811  434502 system_pods.go:61] "kube-scheduler-embed-certs-20210813210115-393438" [b7048830-cba7-4f74-9143-0df360f72f9d] Running
	I0813 21:07:15.859823  434502 system_pods.go:61] "metrics-server-7c784ccb57-8nk4r" [866b08d2-ac27-4bea-a139-ef4bd73f01c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:07:15.859841  434502 system_pods.go:61] "storage-provisioner" [7bf768d3-0513-4a9d-a42f-632676795045] Running
	I0813 21:07:15.859848  434502 system_pods.go:74] duration metric: took 20.635509ms to wait for pod list to return data ...
	I0813 21:07:15.859859  434502 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:07:15.863654  434502 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:07:15.863685  434502 node_conditions.go:123] node cpu capacity is 2
	I0813 21:07:15.863703  434502 node_conditions.go:105] duration metric: took 3.834933ms to run NodePressure ...
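
The system_pods and node_conditions steps above list the kube-system pods and read the node's capacity before continuing. A client-go sketch of the same two checks, with error handling trimmed and the kubeconfig path assumed:

    // Sketch: the system_pods / node_conditions verification, via client-go.
    // Error handling is trimmed for brevity; kubeconfig path is an assumption.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)

    	pods, _ := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}

    	nodes, _ := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	for _, n := range nodes.Items {
    		// mirrors "node storage ephemeral capacity is ..." / "node cpu capacity is ..."
    		fmt.Println("ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
    		fmt.Println("cpu:", n.Status.Capacity.Cpu().String())
    	}
    }
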
	I0813 21:07:15.863721  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:07:16.193321  434502 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 21:07:16.199509  434502 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0813 21:07:16.566486  434502 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0813 21:07:17.014911  434502 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0813 21:07:17.548331  434502 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0813 21:07:18.284613  434236 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.784551  434236 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.784587  434236 pod_ready.go:81] duration metric: took 4.515169958s waiting for pod "kube-controller-manager-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.784605  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-59w6c" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.789934  434236 pod_ready.go:92] pod "kube-proxy-59w6c" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.789957  434236 pod_ready.go:81] duration metric: took 5.344262ms waiting for pod "kube-proxy-59w6c" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.789969  434236 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.795694  434236 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:07:18.795714  434236 pod_ready.go:81] duration metric: took 5.73581ms waiting for pod "kube-scheduler-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:18.795724  434236 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" ...
	I0813 21:07:20.811739  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.306423  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.311424  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:17.625434  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:20.132243  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:18.336752  434502 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0813 21:07:19.861088  434502 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0813 21:07:20.942258  434502 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0813 21:07:22.818506  434502 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0813 21:07:23.311525  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.314360  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.806784  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.308634  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:22.624709  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:24.625595  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:27.120921  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:25.380548  434502 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0813 21:07:27.318633  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.812304  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:27.808395  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.808591  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:29.126100  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:31.621750  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:30.518560  434502 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0813 21:07:31.812364  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.813572  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.311298  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:32.307070  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:34.319179  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.808675  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:33.622811  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:36.126056  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.311412  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.312272  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.809354  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:41.307236  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:38.126184  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.128899  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:40.283114  434502 retry.go:31] will retry after 18.937774914s: kubelet not initialised
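
The retry.go intervals above (360ms, 436ms, 527ms, 780ms, ... 9.7s, 18.9s) trace an exponential backoff with jitter, bounded by an overall deadline. A sketch of that shape; the initial interval, doubling factor, and jitter range are illustrative, not minikube's exact policy.

    // Sketch: exponential backoff with jitter, matching the shape of the
    // retry.go intervals above. Factor and jitter values are assumptions.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryExpo(op func() error, initial, deadline time.Duration) error {
    	start := time.Now()
    	wait := initial
    	for {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out after %v: %w", deadline, err)
    		}
    		// jitter in [wait, 2*wait) so concurrent waiters don't retry in lockstep
    		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		wait *= 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryExpo(func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	}, 300*time.Millisecond, time.Minute)
    }
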
	I0813 21:07:42.810757  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.310911  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:43.808043  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:46.305820  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:42.621376  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:45.122604  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.124604  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:47.312196  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.315617  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:48.307988  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:50.309212  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:49.621521  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.624309  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:51.810709  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:53.823425  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.311568  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:52.807052  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:55.305029  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:54.122426  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:56.125762  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.812506  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:01.313636  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:57.308380  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.809491  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:58.130929  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:00.133009  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:07:59.231806  434502 kubeadm.go:746] kubelet initialised
	I0813 21:07:59.231828  434502 kubeadm.go:747] duration metric: took 43.038475839s waiting for restarted kubelet to initialise ...
	I0813 21:07:59.231845  434502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:07:59.239711  434502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:01.276611  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.812222  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.310423  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.308286  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.308778  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.805247  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:02.621598  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:04.622355  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:06.622525  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:03.277406  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:05.783084  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.310901  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.312505  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.805886  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.807381  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.623877  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:11.122714  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:08.277357  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:10.281005  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.774931  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:12.809835  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:14.814406  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.307768  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.309217  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:13.123158  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:15.620492  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:14.775335  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.275884  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.311309  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.311627  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.806855  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.807892  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.808757  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:17.638499  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:20.126993  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:22.128857  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:19.276049  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.280006  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:21.810244  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.810572  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:25.812501  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.308611  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:26.805692  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:24.627607  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:27.122261  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:23.775905  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:26.276141  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.310324  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.312054  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.806282  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.307435  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:29.129014  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:31.622549  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:28.284915  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:30.774964  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:32.775965  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:32.809772  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.810265  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:33.809848  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.305688  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.130011  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.622943  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:34.781489  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:37.275186  434502 pod_ready.go:102] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:36.811560  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.813295  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.815292  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.308742  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:40.805956  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.623795  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:41.123452  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:38.275332  434502 pod_ready.go:92] pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.275365  434502 pod_ready.go:81] duration metric: took 39.035626612s waiting for pod "coredns-558bd4d5db-pt8qp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.275379  434502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.281005  434502 pod_ready.go:92] pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.281022  434502 pod_ready.go:81] duration metric: took 5.63355ms waiting for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.281034  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.287755  434502 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.287768  434502 pod_ready.go:81] duration metric: took 6.727485ms waiting for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.287777  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.295981  434502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.295996  434502 pod_ready.go:81] duration metric: took 8.211264ms waiting for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.296006  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kjphp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.302322  434502 pod_ready.go:92] pod "kube-proxy-kjphp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.302358  434502 pod_ready.go:81] duration metric: took 6.3457ms waiting for pod "kube-proxy-kjphp" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.302369  434502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.674125  434502 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:08:38.674147  434502 pod_ready.go:81] duration metric: took 371.76822ms waiting for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:38.674161  434502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" ...
	I0813 21:08:41.079821  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.310251  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.311095  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:42.808973  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:44.809136  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.623697  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:46.127208  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:43.080389  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:45.583078  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.586577  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.811216  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:49.811374  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:47.306709  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:49.310656  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.807825  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:48.133154  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.136763  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:50.079908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.083234  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:51.811653  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.315282  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:53.812570  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.307726  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:52.622320  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.624020  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:57.125236  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:54.580777  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.581624  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:56.812257  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.311620  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.312173  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:58.310307  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:00.805499  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.126399  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.622100  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:08:59.080603  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:01.081108  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.817167  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:06.309418  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:02.807319  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.309772  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:04.130586  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:06.621919  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:03.087920  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:05.579200  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:07.579537  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.317286  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.811532  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:07.807745  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.306939  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:08.622411  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.626074  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:10.082379  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:12.086276  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:13.311319  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.311868  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:12.308511  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.806605  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:16.807320  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:13.122716  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:15.622820  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:14.583833  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.085575  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.811201  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:20.312158  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.307729  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.308201  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:17.623511  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.630537  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:22.126818  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:19.580074  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:21.581281  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:22.313523  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:24.809941  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:23.311738  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:25.808963  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:24.128731  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.620247  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:23.581660  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.081792  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:26.811628  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:29.311318  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.307368  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:30.808816  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.622166  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:30.622845  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:28.584687  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:31.080501  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:31.812694  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:34.313072  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.307792  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:35.308295  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.121915  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:35.122855  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:37.123919  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:33.582420  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:36.081160  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:36.812368  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.313851  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:37.308725  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.309348  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.809891  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:39.124768  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.125582  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:38.082318  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:40.580574  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:42.589121  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:41.810962  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:44.310310  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:46.311501  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:44.306795  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:46.309861  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:43.128388  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:45.623794  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:45.079577  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:47.081075  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.315701  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:50.811121  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.811244  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:51.308695  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:48.127029  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:50.132607  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:49.084131  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:51.085632  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.313563  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:55.812263  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.806483  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:56.308595  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:52.621447  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:54.622461  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:56.622576  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:53.581563  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:55.584272  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.311305  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:00.312205  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.808808  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:01.310736  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:59.122709  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:01.183950  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:09:58.082114  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:00.580651  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:02.582962  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:02.812266  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:04.813638  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:03.810231  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:06.307885  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:03.628234  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:06.128265  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:04.583232  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:07.087258  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:07.313525  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:09.812384  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:08.308710  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:10.807681  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:08.623417  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.120889  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:09.087903  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.579908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:11.815424  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:14.311027  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:12.808163  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.307806  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:13.124839  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.622827  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:13.584071  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:15.644677  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:16.812994  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.813605  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:21.311156  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:17.808726  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:19.808863  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.128710  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:20.622257  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:18.080644  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:20.086210  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.579739  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:23.312656  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:25.811861  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.309509  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.810625  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:22.626308  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:25.131799  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:24.581224  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.081507  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:28.311799  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:30.818179  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.306016  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.307701  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:31.807555  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:27.623183  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.624183  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:32.131508  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:29.083248  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:31.586828  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:33.310454  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:35.311492  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:34.307390  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.308357  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:34.622202  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.622387  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:33.591286  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:36.081362  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:37.809092  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:39.810884  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:38.807809  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:40.808110  434036 pod_ready.go:102] pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:39.128048  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:41.623162  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:38.083908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:40.584247  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:41.811080  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.815387  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:46.310171  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.299572  434036 pod_ready.go:81] duration metric: took 4m0.401335464s waiting for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:43.299598  434036 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-dm4n5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:10:43.299620  434036 pod_ready.go:38] duration metric: took 4m33.366575794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:43.299662  434036 kubeadm.go:604] restartCluster took 5m40.655534371s
	W0813 21:10:43.299904  434036 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
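The 434036 run above exhausted its 4m0s WaitExtra budget polling the metrics-server pod on a roughly two-second cadence, so minikube gives up on restarting the existing cluster and falls back to a full `kubeadm reset`. (The metrics-server pods polled throughout this log never reach Ready; further down, the addon is wired to the deliberately unpullable image fake.domain/k8s.gcr.io/echoserver:1.4, so the 4m timeout is the expected outcome for these tests.) The shape of that wait loop, sketched with client-go under the assumption of a standard kubeconfig; an illustration of the polling pattern, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod's Ready condition until it is True or the
    // deadline passes, mirroring the ~2s cadence visible in the log above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting %v for pod %q in %q to be Ready", timeout, name, ns)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-8546d8b77b-dm4n5", 4*time.Minute))
    }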
	I0813 21:10:43.300024  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:10:46.304680  434036 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.004628264s)
	I0813 21:10:46.304745  434036 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:46.318447  434036 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:46.318523  434036 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:46.352096  434036 cri.go:76] found id: "f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7"
	I0813 21:10:46.352124  434036 cri.go:76] found id: "b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010"
	I0813 21:10:46.352131  434036 cri.go:76] found id: ""
	W0813 21:10:46.352140  434036 kubeadm.go:840] found 2 kube-system containers to stop
	I0813 21:10:46.352190  434036 cri.go:221] Stopping containers: [f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7 b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010]
	I0813 21:10:46.352250  434036 ssh_runner.go:149] Run: which crictl
	I0813 21:10:46.356519  434036 ssh_runner.go:149] Run: sudo /bin/crictl stop f57d117554fe3223ad39b5aaa25d48ea6cc1db88b62c7dc8ca31efbff358f0f7 b0b0d0c50df023fb7aa8711c6e6a8a073522ac78bf040db5cf50faee00f31010
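Before re-initializing, the leftover kube-system containers are enumerated by their CRI pod-namespace label and stopped in a single crictl call, exactly the two commands shown above. A minimal Go equivalent, assuming crictl on the PATH and root via sudo (a sketch of the same flow, not minikube's cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers mirrors the two crictl invocations above:
    // list every container labelled io.kubernetes.pod.namespace=kube-system,
    // then stop them all in one call.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // the 434426 run below finds none and skips straight to cleanup
        }
        args := append([]string{"crictl", "stop"}, ids...)
        return exec.Command("sudo", args...).Run()
    }

    func main() {
        fmt.Println(stopKubeSystemContainers())
    }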
	I0813 21:10:46.389495  434036 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:46.397140  434036 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:46.406441  434036 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:46.406489  434036 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0813 21:10:46.994208  434036 out.go:204]   - Generating certificates and keys ...
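Because the `ls -la` probe exited with status 2, none of the four kubeconfig files exist, so stale-config cleanup is skipped and `kubeadm init` runs against the freshly copied /var/tmp/minikube/kubeadm.yaml with the preflight checks for pre-populated directories, existing manifests, port 10250, and swap explicitly ignored. A hedged sketch of driving that invocation from Go (the paths are the log's own; the helper itself is hypothetical):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // kubeadmInit re-runs `kubeadm init` with the given preflight errors
    // ignored; binDir is prepended to PATH exactly as the ssh_runner line does.
    func kubeadmInit(binDir, config string, ignored []string) error {
        cmd := exec.Command("sudo", "env", "PATH="+binDir+":"+os.Getenv("PATH"),
            "kubeadm", "init", "--config", config,
            "--ignore-preflight-errors="+strings.Join(ignored, ","))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        _ = kubeadmInit("/var/lib/minikube/binaries/v1.14.0",
            "/var/tmp/minikube/kubeadm.yaml",
            []string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap"})
    }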
	I0813 21:10:44.137716  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:46.623167  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:43.080064  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:45.080782  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:47.085711  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:48.311498  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:50.322271  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:48.095926  434036 out.go:204]   - Booting up control plane ...
	I0813 21:10:48.623391  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:50.624889  434426 pod_ready.go:102] pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:51.113278  434426 pod_ready.go:81] duration metric: took 4m0.399380697s waiting for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" ...
	E0813 21:10:51.113311  434426 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9bt6z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:10:51.113332  434426 pod_ready.go:38] duration metric: took 4m41.936938903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:10:51.113362  434426 kubeadm.go:604] restartCluster took 5m11.230222626s
	W0813 21:10:51.113488  434426 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:10:51.113526  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:10:49.584163  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:52.081779  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.882001  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.768438465s)
	I0813 21:10:54.882088  434426 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:10:54.898511  434426 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:10:54.898580  434426 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:10:54.936468  434426 cri.go:76] found id: ""
	I0813 21:10:54.936558  434426 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:10:54.945597  434426 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:10:54.953577  434426 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:10:54.953617  434426 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:10:55.502804  434426 out.go:204]   - Generating certificates and keys ...
	I0813 21:10:52.813241  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:54.814328  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.527151  434426 out.go:204]   - Booting up control plane ...
	I0813 21:10:54.088351  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:56.582557  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:00.186147  434036 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:00.630583  434036 cni.go:93] Creating CNI manager for ""
	I0813 21:11:00.630631  434036 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:10:57.310913  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:10:59.811517  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:00.632424  434036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:00.632506  434036 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:00.641928  434036 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
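The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain that the "recommending bridge" line selected for the kvm2 + containerd combination. The sketch below writes a representative bridge+portmap conflist; the field values are typical bridge-plugin settings chosen for illustration, not the exact file minikube renders:

    package main

    import "os"

    // conflist is a representative bridge+portmap CNI chain; values here are
    // common bridge-plugin defaults, not minikube's exact 457-byte template.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // Needs root, as the `sudo mkdir -p /etc/cni/net.d` above implies.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }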
	I0813 21:11:00.656175  434036 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:00.656236  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:00.656255  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=old-k8s-version-20210813205952-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:01.059199  434036 ops.go:34] apiserver oom_adj: 16
	I0813 21:11:01.059224  434036 ops.go:39] adjusting apiserver oom_adj to -10
	I0813 21:11:01.059191  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:01.059238  434036 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
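The apiserver's legacy OOM knob was read back as 16 in this v1.14 run, so minikube lowers it to -10 to keep the kernel OOM killer away from kube-apiserver (the v1.22.0-rc.0 run below reads -16 and is left as is). A Go rendering of the same pgrep-and-write step, assuming root (a sketch, not minikube's ops.go):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the apiserver PID the way the log does (pgrep may return several
        // PIDs on a busy host; the sketch takes the first).
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        path := fmt.Sprintf("/proc/%s/oom_adj", pid) // legacy knob; oom_score_adj is the modern one
        cur, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", cur)
        // Needs root: make the apiserver an unattractive OOM-kill target.
        if err := os.WriteFile(path, []byte("-10"), 0644); err != nil {
            panic(err)
        }
    }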
	I0813 21:11:01.674772  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:10:58.584725  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:01.081620  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:02.311564  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:04.313948  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:02.174386  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:02.674774  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.174370  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.675153  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:04.174246  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:04.674985  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:05.174503  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:05.675000  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:06.174971  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:06.674929  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:03.582192  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:06.081634  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:06.814844  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:09.310738  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:11.313358  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:07.174993  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:07.675061  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.174860  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.674596  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:09.175181  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:09.674238  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:10.174232  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:10.674797  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:11.174169  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:11.675156  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:08.580977  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:10.584133  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:12.584516  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:13.172343  434426 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:13.720996  434426 cni.go:93] Creating CNI manager for ""
	I0813 21:11:13.721025  434426 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:11:12.174219  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:12.674355  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.174223  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.675187  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.175037  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.674721  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.174792  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.674194  434036 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.922458  434036 kubeadm.go:985] duration metric: took 15.266270548s to wait for elevateKubeSystemPrivileges.
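The half-second `kubectl get sa default` loop above is elevateKubeSystemPrivileges waiting for the apiserver to mint the default ServiceAccount, the precondition for the minikube-rbac cluster-admin binding created at 21:11:00. The same wait expressed with client-go, assuming the in-VM kubeconfig path from the log (a sketch of the pattern, not minikube's kubeadm.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, matching the cadence of the kubectl loop above.
        for start := time.Now(); time.Since(start) < 2*time.Minute; time.Sleep(500 * time.Millisecond) {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.TODO(), "default", metav1.GetOptions{}); err == nil {
                fmt.Println("default ServiceAccount ready after", time.Since(start))
                return
            }
        }
        panic("timed out waiting for the default ServiceAccount")
    }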
	I0813 21:11:15.922496  434036 kubeadm.go:392] StartCluster complete in 6m13.317137577s
	I0813 21:11:15.922521  434036 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:15.922651  434036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:15.924691  434036 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:16.472904  434036 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210813205952-393438" rescaled to 1
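Rescaling the coredns deployment to a single replica goes through the Deployment's Scale subresource; with client-go that is a GetScale/UpdateScale pair (a sketch of the operation the kapi.go line reports, not its actual code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Read the current Scale subresource, set replicas to 1, write it back.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(
            context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
            context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }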
	I0813 21:11:16.473031  434036 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.83.180 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0813 21:11:16.473054  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:16.474575  434036 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:16.473151  434036 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:16.474654  434036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:16.474725  434036 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.474749  434036 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.474757  434036 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:16.474757  434036 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.474778  434036 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.474787  434036 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:16.474791  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.474823  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.473340  434036 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:11:16.475008  434036 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.475041  434036 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.475053  434036 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:16.475079  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.475401  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.475459  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.475505  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.475545  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.475769  434036 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.475844  434036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210813205952-393438"
	I0813 21:11:16.476015  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.476055  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.476272  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.476315  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.497442  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0813 21:11:16.497642  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0813 21:11:16.497988  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498021  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0813 21:11:16.498283  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498412  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.498559  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0813 21:11:16.498598  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498613  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498761  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498779  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498860  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.498920  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.498942  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.498995  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.499297  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499341  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499431  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.499455  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.499592  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.499635  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.499788  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.499949  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.499981  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.500022  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.500027  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.500054  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.514821  434036 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210813205952-393438"
	W0813 21:11:16.514848  434036 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:16.514875  434036 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:11:16.515287  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.515325  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.515556  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0813 21:11:16.515581  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42801
	I0813 21:11:16.515557  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43087
	I0813 21:11:16.516056  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516149  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516217  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.516575  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.516594  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.516709  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.516728  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.517055  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.517058  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.517332  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.517382  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.517392  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.517409  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.517811  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.518005  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.523868  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.524072  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.525915  434036 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:16.527474  434036 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:11:16.524496  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.527547  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:11:16.527559  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:11:16.527584  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.529027  434036 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:16.529080  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:16.529091  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:16.529110  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:13.813030  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:16.311896  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:16.530900  434036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:16.529862  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0813 21:11:16.531012  434036 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:16.531026  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:16.531044  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.531462  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.532183  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.532200  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.532644  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.533900  434036 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:16.533944  434036 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:16.537401  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.537962  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538641  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.538693  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538805  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.538842  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.538873  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.539033  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.539211  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.539315  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.539359  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.539691  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.539853  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.539998  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.540454  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.540864  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.540899  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.541064  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.541224  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.541450  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.541603  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:11:16.547557  434036 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37251
	I0813 21:11:16.547929  434036 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:16.548399  434036 main.go:130] libmachine: Using API Version  1
	I0813 21:11:16.548424  434036 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:16.548756  434036 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:16.548937  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:11:16.551474  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:11:16.551673  434036 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:16.551689  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:11:16.551707  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:11:16.556960  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.557325  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:11:16.557358  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:11:16.557473  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:11:16.557621  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:11:16.557777  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:11:16.557905  434036 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
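Each of the `new ssh client` lines above opens a session to 192.168.83.180:22 as user docker, authenticated with the profile's id_rsa; every scp and Run in this log rides such a session. A minimal golang.org/x/crypto/ssh sketch (the key path is shortened for the example; host-key verification is skipped only because this is a throwaway test VM):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path shortened for the sketch; the log uses the full profile path.
        key, err := os.ReadFile("/home/jenkins/.minikube/machines/old-k8s-version/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.83.180:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput("uname -a")
        fmt.Print(string(out))
    }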
	I0813 21:11:16.891070  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:11:16.891095  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:11:16.921005  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:11:16.921029  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:11:16.958755  434036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210813205952-393438" to be "Ready" ...
	I0813 21:11:16.958833  434036 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
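The sed pipeline above splices a hosts{} stanza into the coredns Corefile immediately before its `forward . /etc/resolv.conf` line, so that host.minikube.internal resolves to the host-side gateway (192.168.83.1 here) from inside the cluster. The same transformation in plain Go (a sketch; the sample Corefile is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // addMinikubeHost inserts a hosts{} block, mapping host.minikube.internal
    // to hostIP, just before the Corefile's forward-to-resolv.conf line.
    func addMinikubeHost(corefile, hostIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(addMinikubeHost(corefile, "192.168.83.1"))
    }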
	I0813 21:11:16.961190  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:11:16.961221  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:11:16.966347  434036 node_ready.go:49] node "old-k8s-version-20210813205952-393438" has status "Ready":"True"
	I0813 21:11:16.966365  434036 node_ready.go:38] duration metric: took 7.580764ms waiting for node "old-k8s-version-20210813205952-393438" to be "Ready" ...
	I0813 21:11:16.966379  434036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:16.972401  434036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace to be "Ready" ...
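
The node_ready.go and pod_ready.go lines above are condition-polling waits: check the node's Ready condition (and later each system pod's) on an interval until it flips to True or the 6m budget runs out. A minimal client-go sketch of the node half, assuming a configured *kubernetes.Clientset and a 2s poll interval (the real cadence is not shown in the log):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node's Ready condition, mirroring the
    // "waiting up to 6m0s for node ... to be Ready" wait above.
    func waitNodeReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient errors and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
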
	I0813 21:11:16.995238  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:17.006124  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:17.025333  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:11:17.025360  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:11:17.027216  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:11:17.027236  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:11:13.722715  434426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:13.722800  434426 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:13.734030  434426 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
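
The two lines above create /etc/cni/net.d and write a 457-byte bridge conflist into it. The exact contents minikube ships are not reproduced in this log, so the subnet and plugin fields below are illustrative assumptions only — a conflist of the general bridge shape, written the same way:

    import "os"

    // A bridge CNI config of the general shape written above; fields are
    // illustrative, not minikube's actual 1-k8s.conflist contents.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [{
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
      }]
    }`

    func writeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }
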
	I0813 21:11:13.750877  434426 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:13.750976  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.750976  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=no-preload-20210813210044-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:13.806850  434426 ops.go:34] apiserver oom_adj: -16
	I0813 21:11:14.073328  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:14.667824  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.168149  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.667995  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:16.167441  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:16.667309  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:15.082786  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.586523  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.111367  434036 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:17.111398  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:11:17.118314  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:11:17.118335  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:11:17.149133  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:17.162658  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:11:17.162693  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:11:17.234536  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:11:17.234569  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:11:17.295123  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:11:17.295156  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:11:17.482125  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:11:17.482162  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:11:17.677419  434036 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:17.677447  434036 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:11:17.757023  434036 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:17.960379  434036 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.001504607s)
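
The pipeline that just completed is how the host.minikube.internal record gets into CoreDNS: fetch the coredns ConfigMap, splice a hosts{} block in front of the forward directive with sed, and `kubectl replace` the result. A sketch that runs the same pipeline the way ssh_runner does, as a single bash -c (paths and versions taken verbatim from the log line above):

    import (
        "os"
        "os/exec"
    )

    // injectHostRecord reproduces the get | sed | replace pipeline above,
    // inserting "<hostIP> host.minikube.internal" into the coredns Corefile.
    func injectHostRecord(hostIP string) error {
        script := `sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | ` +
            `sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           ` + hostIP + ` host.minikube.internal\n           fallthrough\n        }' | ` +
            `sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -`
        cmd := exec.Command("/bin/bash", "-c", script)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }
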
	I0813 21:11:17.960480  434036 start.go:728] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS
	I0813 21:11:18.539601  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.544332297s)
	I0813 21:11:18.539651  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.539654  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.533491345s)
	I0813 21:11:18.539692  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.539666  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.539715  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.539988  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540005  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540013  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540022  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540110  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540198  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540213  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540222  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540238  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540266  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540277  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540278  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.540299  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540242  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.540531  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540593  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:18.540647  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540664  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.540692  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.540703  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.891875  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742689541s)
	I0813 21:11:18.891930  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.891957  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.892355  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.892377  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.892389  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:18.892399  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:18.892625  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:18.892635  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.892645  434036 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210813205952-393438"
	I0813 21:11:19.031254  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:19.828027  434036 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.070944239s)
	I0813 21:11:19.828086  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:19.828101  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:19.828430  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:19.828452  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | Closing plugin on server side
	I0813 21:11:19.828452  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:19.828488  434036 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:19.828501  434036 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .Close
	I0813 21:11:19.828750  434036 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:19.828768  434036 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:18.311970  434236 pod_ready.go:102] pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:18.803484  434236 pod_ready.go:81] duration metric: took 4m0.007742732s waiting for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" ...
	E0813 21:11:18.803527  434236 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-x428n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:11:18.803553  434236 pod_ready.go:38] duration metric: took 4m7.574137981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:18.803589  434236 kubeadm.go:604] restartCluster took 5m50.491873522s
	W0813 21:11:18.803752  434236 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:11:18.803790  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:11:19.830658  434036 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:11:19.830710  434036 addons.go:344] enableAddons completed in 3.357568207s
	I0813 21:11:21.074595  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:17.167975  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:17.668129  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:18.167259  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:18.667894  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.167326  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.667247  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:20.167336  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:20.667616  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:21.167732  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:21.667655  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:19.587857  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.083079  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.321035  434236 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.517214272s)
	I0813 21:11:22.321114  434236 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:11:22.336500  434236 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:11:22.336600  434236 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:11:22.381833  434236 cri.go:76] found id: ""
	I0813 21:11:22.381921  434236 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:11:22.390007  434236 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:11:22.402520  434236 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:11:22.402571  434236 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:11:22.966621  434236 out.go:204]   - Generating certificates and keys ...
	I0813 21:11:23.985069  434236 out.go:204]   - Booting up control plane ...
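
The 434236 process above gave up on restarting the existing cluster, ran `kubeadm reset`, found the kubeconfig files gone (so no stale-config cleanup was needed), and is now running a fresh `kubeadm init` with a fixed ignore-preflight list. A simplified Go sketch of that reset-then-init recovery path; binary paths and flags are taken from the log lines above, with the ignore list abbreviated:

    import (
        "os"
        "os/exec"
    )

    // resetAndInit mirrors the recovery path above: kubeadm reset, then a
    // fresh kubeadm init. The --ignore-preflight-errors list is abbreviated
    // here; the full list appears in the Start: line above.
    func resetAndInit() error {
        bin := "/var/lib/minikube/binaries/v1.21.3"
        env := "PATH=" + bin + ":" + os.Getenv("PATH")
        reset := exec.Command("sudo", "env", env, "kubeadm", "reset",
            "--cri-socket", "/run/containerd/containerd.sock", "--force")
        if err := reset.Run(); err != nil {
            return err
        }
        init := exec.Command("sudo", "env", env, "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem")
        return init.Run()
    }
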
	I0813 21:11:23.486070  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:25.488157  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:22.167466  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:22.668165  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:23.167304  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:23.667348  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:24.167472  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:24.667538  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:25.167599  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:25.667365  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:26.167194  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:26.667310  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:27.167160  434426 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:27.282634  434426 kubeadm.go:985] duration metric: took 13.531711919s to wait for elevateKubeSystemPrivileges.
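
The long run of half-second `kubectl get sa default` retries above is the elevateKubeSystemPrivileges wait: the default ServiceAccount is created asynchronously by kube-controller-manager, so minikube simply polls until the lookup succeeds before granting kube-system its cluster-admin binding. A client-go equivalent of that loop, assuming a configured clientset (the log does this through kubectl over SSH instead):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitDefaultSA polls every 500ms, matching the retry cadence above,
    // until the "default" ServiceAccount exists.
    func waitDefaultSA(cs *kubernetes.Clientset) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.TODO(), "default", metav1.GetOptions{})
            return err == nil, nil
        })
    }
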
	I0813 21:11:27.282691  434426 kubeadm.go:392] StartCluster complete in 5m47.489271406s
	I0813 21:11:27.282716  434426 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:27.282848  434426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:27.284813  434426 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:27.814838  434426 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210813210044-393438" rescaled to 1
	I0813 21:11:27.814916  434426 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:11:27.814960  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:24.580604  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:26.581927  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:27.816918  434426 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:27.816991  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:27.815020  434426 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:27.817075  434426 addons.go:59] Setting dashboard=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817089  434426 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817094  434426 addons.go:135] Setting addon dashboard=true in "no-preload-20210813210044-393438"
	I0813 21:11:27.815219  434426 config.go:177] Loaded profile config "no-preload-20210813210044-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:11:27.817105  434426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210813210044-393438"
	I0813 21:11:27.817111  434426 addons.go:59] Setting metrics-server=true in profile "no-preload-20210813210044-393438"
	I0813 21:11:27.817138  434426 addons.go:135] Setting addon metrics-server=true in "no-preload-20210813210044-393438"
	I0813 21:11:27.817076  434426 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210813210044-393438"
	W0813 21:11:27.817150  434426 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:27.817165  434426 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210813210044-393438"
	W0813 21:11:27.817177  434426 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:27.817202  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.817218  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	W0813 21:11:27.817102  434426 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:27.817286  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.817568  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817609  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817638  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817667  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817735  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817770  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.817785  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.817803  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.829240  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0813 21:11:27.829663  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.830228  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.830249  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.830834  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.831042  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.833857  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0813 21:11:27.834306  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.834848  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.834868  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.835441  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.835990  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.836027  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.836766  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0813 21:11:27.837138  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.837429  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0813 21:11:27.837624  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.837643  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.837784  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.837987  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.838257  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.838272  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.838627  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.838727  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.838777  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.839268  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.839313  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.854776  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0813 21:11:27.854789  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0813 21:11:27.854873  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0813 21:11:27.855191  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.855405  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.855681  434426 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210813210044-393438"
	W0813 21:11:27.855703  434426 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:27.855732  434426 host.go:66] Checking if "no-preload-20210813210044-393438" exists ...
	I0813 21:11:27.855782  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.855807  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.855876  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.855892  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.856153  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.856175  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.856191  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.856214  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.856359  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.856382  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.857051  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.857489  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.857514  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.857874  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.858051  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.861869  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.862110  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.863672  434426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:27.865163  434426 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:11:27.865222  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:11:27.863060  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.865237  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:11:27.865259  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.863777  434426 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:27.865299  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:27.865322  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.867484  434426 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:27.987850  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:30.491230  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:27.869002  434426 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:27.869069  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:27.869084  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:27.869103  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.869712  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0813 21:11:27.870137  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.870637  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.870661  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.871139  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.871751  434426 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:27.871808  434426 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:27.872140  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.873481  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.873515  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.873685  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.873901  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.874088  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.874251  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.874538  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.875107  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.875132  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.875322  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.875465  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.875608  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.875702  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.877188  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.877589  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.877620  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.877775  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.877959  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.878119  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.878270  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:27.883439  434426 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0813 21:11:27.883840  434426 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:27.884297  434426 main.go:130] libmachine: Using API Version  1
	I0813 21:11:27.884322  434426 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:27.884659  434426 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:27.884850  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetState
	I0813 21:11:27.887864  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .DriverName
	I0813 21:11:27.888070  434426 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:27.888087  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:11:27.888105  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHHostname
	I0813 21:11:27.893121  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.893492  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:61:bf", ip: ""} in network mk-no-preload-20210813210044-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:05:18 +0000 UTC Type:0 Mac:52:54:00:e4:61:bf Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:no-preload-20210813210044-393438 Clientid:01:52:54:00:e4:61:bf}
	I0813 21:11:27.893516  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | domain no-preload-20210813210044-393438 has defined IP address 192.168.61.54 and MAC address 52:54:00:e4:61:bf in network mk-no-preload-20210813210044-393438
	I0813 21:11:27.893656  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHPort
	I0813 21:11:27.893808  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHKeyPath
	I0813 21:11:27.893979  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .GetSSHUsername
	I0813 21:11:27.894145  434426 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/no-preload-20210813210044-393438/id_rsa Username:docker}
	I0813 21:11:28.140814  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:11:28.140836  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:11:28.308462  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:11:28.308490  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:11:28.312755  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:28.316440  434426 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210813210044-393438" to be "Ready" ...
	I0813 21:11:28.316649  434426 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:11:28.320165  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:11:28.320188  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:11:28.323008  434426 node_ready.go:49] node "no-preload-20210813210044-393438" has status "Ready":"True"
	I0813 21:11:28.323025  434426 node_ready.go:38] duration metric: took 6.554015ms waiting for node "no-preload-20210813210044-393438" to be "Ready" ...
	I0813 21:11:28.323037  434426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:11:28.335561  434426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:28.384581  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:28.455637  434426 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:28.455676  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:11:28.465004  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:11:28.465031  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:11:28.656110  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:11:28.656140  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:11:28.701795  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:28.916338  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:11:28.916368  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:11:29.075737  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:11:29.075769  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:11:29.146984  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:11:29.147013  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:11:29.358157  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:11:29.358203  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:11:29.917909  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:11:29.917935  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:11:29.986046  434426 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:29.986075  434426 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:11:30.122849  434426 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:11:30.364045  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:30.578023  434426 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.261329708s)
	I0813 21:11:30.578056  434426 start.go:728] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS
	I0813 21:11:30.578249  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.265452722s)
	I0813 21:11:30.578304  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.578324  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.578634  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.578652  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.578680  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.578694  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.580051  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.580102  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.580128  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.580153  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.580167  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.580440  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.580459  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.746411  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.361788474s)
	I0813 21:11:30.746464  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.746478  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.746827  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.746890  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.746915  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:30.746939  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:30.746955  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:30.747238  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:30.747278  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:30.747288  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.414876  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.713031132s)
	I0813 21:11:31.414948  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:31.414972  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:31.415309  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:31.415443  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:31.415475  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.415495  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:31.415516  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:31.416949  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) DBG | Closing plugin on server side
	I0813 21:11:31.416967  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:31.416984  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:31.416997  434426 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210813210044-393438"
	I0813 21:11:32.305740  434426 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.182835966s)
	I0813 21:11:32.305798  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:32.305817  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:32.306117  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:32.306138  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:32.306150  434426 main.go:130] libmachine: Making call to close driver server
	I0813 21:11:32.306161  434426 main.go:130] libmachine: (no-preload-20210813210044-393438) Calling .Close
	I0813 21:11:32.307516  434426 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:11:32.307583  434426 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:11:29.082122  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:31.084872  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:32.988501  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:35.489902  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:32.309263  434426 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:11:32.309287  434426 addons.go:344] enableAddons completed in 4.494276897s
	I0813 21:11:32.858825  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:34.860075  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:36.860345  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:33.584913  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:35.593941  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.171000  434236 out.go:204]   - Configuring RBAC rules ...
	I0813 21:11:40.713714  434236 cni.go:93] Creating CNI manager for ""
	I0813 21:11:40.713746  434236 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:11:40.715369  434236 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:11:40.715459  434236 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:11:40.728777  434236 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:11:40.756822  434236 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:11:40.756935  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=default-k8s-different-port-20210813210121-393438 minikube.k8s.io/updated_at=2021_08_13T21_11_40_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:40.756935  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
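
The `kubectl label nodes ... --all --overwrite` call above stamps every node with minikube's version, commit, profile name, and update time. The same effect can be had with a client-go strategic-merge patch; a sketch for a single node, with only two of the four labels shown (the commit and updated_at labels from the log follow the same pattern):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelNode applies minikube-style identity labels to one node via a
    // strategic-merge patch, instead of shelling out to kubectl label.
    func labelNode(cs *kubernetes.Clientset, node string) error {
        patch := []byte(`{"metadata":{"labels":{
            "minikube.k8s.io/version": "v1.22.0",
            "minikube.k8s.io/name": "default-k8s-different-port-20210813210121-393438"}}}`)
        _, err := cs.CoreV1().Nodes().Patch(context.TODO(), node,
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }
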
	I0813 21:11:41.175747  434236 ops.go:34] apiserver oom_adj: -16
	I0813 21:11:41.176252  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:37.986288  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:39.987469  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:38.865021  434426 pod_ready.go:102] pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.854753  434426 pod_ready.go:97] error getting pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-2kv7b" not found
	I0813 21:11:40.854790  434426 pod_ready.go:81] duration metric: took 12.519201094s waiting for pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace to be "Ready" ...
	E0813 21:11:40.854805  434426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-2kv7b" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-2kv7b" not found
	I0813 21:11:40.854816  434426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.864186  434426 pod_ready.go:92] pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.864202  434426 pod_ready.go:81] duration metric: took 9.379202ms waiting for pod "coredns-78fcd69978-r4dmk" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.864211  434426 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.871022  434426 pod_ready.go:92] pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.871041  434426 pod_ready.go:81] duration metric: took 6.824229ms waiting for pod "etcd-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.871051  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.878077  434426 pod_ready.go:92] pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.878097  434426 pod_ready.go:81] duration metric: took 7.039745ms waiting for pod "kube-apiserver-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.878109  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.896064  434426 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:40.896083  434426 pod_ready.go:81] duration metric: took 17.966303ms waiting for pod "kube-controller-manager-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:40.896092  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2k9qh" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.058881  434426 pod_ready.go:92] pod "kube-proxy-2k9qh" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:41.058909  434426 pod_ready.go:81] duration metric: took 162.808554ms waiting for pod "kube-proxy-2k9qh" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.058923  434426 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.456678  434426 pod_ready.go:92] pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:41.456708  434426 pod_ready.go:81] duration metric: took 397.772439ms waiting for pod "kube-scheduler-no-preload-20210813210044-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:41.456720  434426 pod_ready.go:38] duration metric: took 13.13366456s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
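	(The pod_ready lines above poll each pod's Ready condition until it reports True or the 6m0s budget runs out. A minimal standalone client-go sketch of the same check; the pod name, namespace, and kubeconfig path are illustrative assumptions, not minikube's actual implementation:)
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config and build a clientset.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s, for up to 6m, until the PodReady condition is True.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-78fcd69978-r4dmk", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("pod ready:", err == nil)
	}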
	I0813 21:11:41.456741  434426 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:11:41.456792  434426 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:11:41.473663  434426 api_server.go:70] duration metric: took 13.658686712s to wait for apiserver process to appear ...
	I0813 21:11:41.473687  434426 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:11:41.473700  434426 api_server.go:239] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0813 21:11:41.481067  434426 api_server.go:265] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0813 21:11:41.482489  434426 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:11:41.482508  434426 api_server.go:129] duration metric: took 8.812243ms to wait for apiserver health ...
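	(The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 response with an "ok" body. A minimal sketch of such a probe; skipping TLS verification is an assumption of this sketch made for brevity, not what minikube does:)
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		resp, err := client.Get("https://192.168.61.54:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
	}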
	I0813 21:11:41.482518  434426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:11:41.661255  434426 system_pods.go:59] 8 kube-system pods found
	I0813 21:11:41.661293  434426 system_pods.go:61] "coredns-78fcd69978-r4dmk" [0549f087-6804-403a-91ac-46ea3176692a] Running
	I0813 21:11:41.661302  434426 system_pods.go:61] "etcd-no-preload-20210813210044-393438" [ae4561cd-c25c-4ec9-952c-ee3f2bb9da33] Running
	I0813 21:11:41.661309  434426 system_pods.go:61] "kube-apiserver-no-preload-20210813210044-393438" [6634f014-b661-496f-b26e-8883011d941d] Running
	I0813 21:11:41.661316  434426 system_pods.go:61] "kube-controller-manager-no-preload-20210813210044-393438" [8ac7be54-2d76-4cc5-98ae-d920758801e3] Running
	I0813 21:11:41.661322  434426 system_pods.go:61] "kube-proxy-2k9qh" [22a31bb3-8b54-429b-9161-471a84001351] Running
	I0813 21:11:41.661329  434426 system_pods.go:61] "kube-scheduler-no-preload-20210813210044-393438" [2da08426-2d5c-4a28-af34-9e233605bc60] Running
	I0813 21:11:41.661342  434426 system_pods.go:61] "metrics-server-7c784ccb57-7z8h9" [5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:11:41.661351  434426 system_pods.go:61] "storage-provisioner" [7f18b572-6c04-49c7-96fb-5a2371bb3c87] Running
	I0813 21:11:41.661362  434426 system_pods.go:74] duration metric: took 178.836213ms to wait for pod list to return data ...
	I0813 21:11:41.661382  434426 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:11:41.856737  434426 default_sa.go:45] found service account: "default"
	I0813 21:11:41.856816  434426 default_sa.go:55] duration metric: took 195.424882ms for default service account to be created ...
	I0813 21:11:41.856845  434426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:11:42.058866  434426 system_pods.go:86] 8 kube-system pods found
	I0813 21:11:42.058901  434426 system_pods.go:89] "coredns-78fcd69978-r4dmk" [0549f087-6804-403a-91ac-46ea3176692a] Running
	I0813 21:11:42.058908  434426 system_pods.go:89] "etcd-no-preload-20210813210044-393438" [ae4561cd-c25c-4ec9-952c-ee3f2bb9da33] Running
	I0813 21:11:42.058914  434426 system_pods.go:89] "kube-apiserver-no-preload-20210813210044-393438" [6634f014-b661-496f-b26e-8883011d941d] Running
	I0813 21:11:42.058919  434426 system_pods.go:89] "kube-controller-manager-no-preload-20210813210044-393438" [8ac7be54-2d76-4cc5-98ae-d920758801e3] Running
	I0813 21:11:42.058923  434426 system_pods.go:89] "kube-proxy-2k9qh" [22a31bb3-8b54-429b-9161-471a84001351] Running
	I0813 21:11:42.058927  434426 system_pods.go:89] "kube-scheduler-no-preload-20210813210044-393438" [2da08426-2d5c-4a28-af34-9e233605bc60] Running
	I0813 21:11:42.058935  434426 system_pods.go:89] "metrics-server-7c784ccb57-7z8h9" [5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:11:42.058940  434426 system_pods.go:89] "storage-provisioner" [7f18b572-6c04-49c7-96fb-5a2371bb3c87] Running
	I0813 21:11:42.058948  434426 system_pods.go:126] duration metric: took 202.083479ms to wait for k8s-apps to be running ...
	I0813 21:11:42.058960  434426 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:11:42.059008  434426 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:42.071584  434426 system_svc.go:56] duration metric: took 12.61257ms WaitForService to wait for kubelet.
	I0813 21:11:42.071614  434426 kubeadm.go:547] duration metric: took 14.256642896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:11:42.071643  434426 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:11:42.255842  434426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:11:42.255875  434426 node_conditions.go:123] node cpu capacity is 2
	I0813 21:11:42.255891  434426 node_conditions.go:105] duration metric: took 184.242906ms to run NodePressure ...
	I0813 21:11:42.255902  434426 start.go:231] waiting for startup goroutines ...
	I0813 21:11:42.309791  434426 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:11:42.311704  434426 out.go:177] 
	W0813 21:11:42.311876  434426 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:11:42.313517  434426 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:11:42.315056  434426 out.go:177] * Done! kubectl is now configured to use "no-preload-20210813210044-393438" cluster and "default" namespace by default
	I0813 21:11:38.082394  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:40.584132  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:42.585105  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:41.845488  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.344719  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.844731  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:43.345378  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:43.845198  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:44.345476  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:44.845084  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:45.345490  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:45.845540  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:46.345470  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:42.487321  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:44.986642  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:46.987014  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:45.082319  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:47.580764  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:46.845406  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:47.345178  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:47.845431  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.344795  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.845248  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:49.344914  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:49.844893  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:50.345681  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:50.845210  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:51.345589  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:48.994491  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:51.487674  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:49.585484  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:52.081864  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:51.845730  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:52.344956  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:52.845569  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:53.345574  434236 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:11:53.474660  434236 kubeadm.go:985] duration metric: took 12.717768206s to wait for elevateKubeSystemPrivileges.
	I0813 21:11:53.474717  434236 kubeadm.go:392] StartCluster complete in 6m25.212590888s
	I0813 21:11:53.474741  434236 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:53.474888  434236 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:11:53.476656  434236 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:11:54.001588  434236 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210813210121-393438" rescaled to 1
	I0813 21:11:54.001644  434236 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.163 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:11:54.003211  434236 out.go:177] * Verifying Kubernetes components...
	I0813 21:11:54.003275  434236 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:11:54.001714  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:11:54.001736  434236 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:11:54.001949  434236 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:11:54.003383  434236 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003390  434236 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003391  434236 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003399  434236 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003419  434236 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003431  434236 addons.go:147] addon dashboard should already be in state true
	I0813 21:11:54.003449  434236 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210813210121-393438"
	I0813 21:11:54.003462  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003403  434236 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003498  434236 addons.go:147] addon metrics-server should already be in state true
	I0813 21:11:54.003403  434236 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.003556  434236 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:11:54.003588  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003526  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.003908  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.003921  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.003951  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.003958  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.003998  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.004034  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.004150  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.004173  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.016624  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0813 21:11:54.016972  434236 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210813210121-393438" to be "Ready" ...
	I0813 21:11:54.017214  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0813 21:11:54.017324  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.017555  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.018129  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.018157  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.018281  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.018306  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.018534  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.018603  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.019128  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.019145  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.019180  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.019215  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.024026  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0813 21:11:54.024370  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.024930  434236 node_ready.go:49] node "default-k8s-different-port-20210813210121-393438" has status "Ready":"True"
	I0813 21:11:54.024945  434236 node_ready.go:38] duration metric: took 7.95601ms waiting for node "default-k8s-different-port-20210813210121-393438" to be "Ready" ...
	I0813 21:11:54.024955  434236 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0813 21:11:54.025186  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.025200  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.025511  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.025673  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.032739  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42125
	I0813 21:11:54.033219  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.033797  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0813 21:11:54.033970  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0813 21:11:54.034136  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.034155  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.034289  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.035331  434236 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:54.035994  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.036306  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.036564  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.036582  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.036588  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.036654  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.036674  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.036955  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.037018  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.037163  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.037523  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.037557  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.038431  434236 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210813210121-393438"
	W0813 21:11:54.038457  434236 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:11:54.038486  434236 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:11:54.038974  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.039017  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.041363  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.041425  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.043687  434236 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:11:54.045078  434236 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:11:54.043789  434236 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:54.046493  434236 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:11:54.045176  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:11:54.046598  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.047176  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:11:54.047197  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:11:54.047217  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.054752  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055203  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.055278  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055365  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055555  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.055868  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.055896  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.055897  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.056065  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.056123  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.056175  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.056259  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.056403  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.056541  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.059420  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0813 21:11:54.059442  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0813 21:11:54.059892  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.059898  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.060344  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.060373  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.060466  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.060485  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.060711  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.060827  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.061004  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.061201  434236 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:11:54.061241  434236 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:11:54.064135  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.066034  434236 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:11:54.066096  434236 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:11:54.066109  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:11:54.066128  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.071757  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.072174  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.072205  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.072376  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.072586  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.072757  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.072905  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.073475  434236 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0813 21:11:54.074138  434236 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:11:54.074589  434236 main.go:130] libmachine: Using API Version  1
	I0813 21:11:54.074614  434236 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:11:54.074996  434236 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:11:54.075187  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:11:54.078408  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:11:54.078608  434236 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:54.078623  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:11:54.078641  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:11:54.084351  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.084794  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:11:54.084831  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:11:54.084993  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:11:54.085141  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:11:54.085298  434236 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:11:54.085409  434236 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:11:54.388135  434236 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:11:54.514682  434236 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:11:54.533557  434236 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:11:54.533584  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:11:54.568881  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:11:54.568903  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:11:54.686526  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:11:54.686556  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:11:54.733535  434236 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:11:54.733558  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:11:54.735648  434236 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
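	(The sed pipeline above splices a hosts block into the CoreDNS Corefile immediately before its forward directive, so host.minikube.internal resolves to the host-side gateway, 192.168.39.1 on this network. The resulting Corefile fragment, reconstructed from the command itself:)
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf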
	I0813 21:11:54.794048  434236 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:54.794081  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:11:54.822039  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:11:54.822063  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:11:54.956980  434236 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:11:55.078656  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:11:55.078693  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:11:55.403590  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:11:55.403621  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:11:55.938172  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:11:55.938205  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:11:56.056177  434236 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:11:56.056200  434236 pod_ready.go:81] duration metric: took 2.02084845s waiting for pod "etcd-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:56.056214  434236 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210813210121-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:11:56.394625  434236 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:11:56.394662  434236 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:11:53.488727  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:55.989499  434036 pod_ready.go:102] pod "coredns-fb8b8dccf-vlm5d" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:54.082227  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:11:56.085579  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	c802f7156732c       523cad1a4df73       17 seconds ago      Exited              dashboard-metrics-scraper   1                   c9a387bc1fe10
	7a5c8d13e38e3       9a07b5b4bfac0       24 seconds ago      Running             kubernetes-dashboard        0                   81a88eed574b3
	d190ad9281e27       6e38f40d628db       26 seconds ago      Running             storage-provisioner         0                   a51e1b05ddab9
	d91696ad46445       8d147537fb7d1       29 seconds ago      Running             coredns                     0                   3f4a9fcf554b7
	9e3a151de9b04       ea6b13ed84e03       31 seconds ago      Running             kube-proxy                  0                   dd71ffcab16c2
	cf1afa08fe13b       cf9cba6c3e4a8       55 seconds ago      Running             kube-controller-manager     2                   d5e3ceb90e013
	aa4d0f5069490       0048118155842       55 seconds ago      Running             etcd                        2                   2cd725a5ec9f8
	0b6d52d93d8b3       7da2efaa5b480       55 seconds ago      Running             kube-scheduler              2                   9ac1643b6bb6b
	d237a3c155160       b2462aa94d403       55 seconds ago      Running             kube-apiserver              2                   df6cafa1ea4bc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:17 UTC, end at Fri 2021-08-13 21:11:59 UTC. --
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.328854670Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.380183215Z" level=info msg="TearDown network for sandbox \"d0ad3ec4867f4f8b9a01c1ad0b60a52f041212eef802ffe403122558d774b29b\" successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.380450757Z" level=info msg="StopPodSandbox for \"d0ad3ec4867f4f8b9a01c1ad0b60a52f041212eef802ffe403122558d774b29b\" returns successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.410477125Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.412481988Z" level=info msg="StartContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.851938459Z" level=info msg="StartContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\" returns successfully"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.895651192Z" level=info msg="Finish piping stderr of container \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.895918542Z" level=info msg="Finish piping stdout of container \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.900425453Z" level=info msg="TaskExit event &TaskExit{ContainerID:12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e,ID:12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e,Pid:6281,ExitStatus:1,ExitedAt:2021-08-13 21:11:41.899550485 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.961287594Z" level=info msg="shim disconnected" id=12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e
	Aug 13 21:11:41 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:41.961432168Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.540886126Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.581358705Z" level=info msg="CreateContainer within sandbox \"c9a387bc1fe1037566f4f4f598ccaeec26529fa826d4d4d2a87e1b37cc328f14\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:42 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:42.583303064Z" level=info msg="StartContainer for \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.056353118Z" level=info msg="StartContainer for \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\" returns successfully"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.091640109Z" level=info msg="Finish piping stdout of container \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.091661389Z" level=info msg="Finish piping stderr of container \"c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.093154616Z" level=info msg="TaskExit event &TaskExit{ContainerID:c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64,ID:c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64,Pid:6367,ExitStatus:1,ExitedAt:2021-08-13 21:11:43.092699695 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.159473154Z" level=info msg="shim disconnected" id=c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.159714244Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed"
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.528164542Z" level=info msg="RemoveContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\""
	Aug 13 21:11:43 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:43.544574531Z" level=info msg="RemoveContainer for \"12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e\" returns successfully"
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.192925403Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.198217738Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:11:45 no-preload-20210813210044-393438 containerd[2056]: time="2021-08-13T21:11:45.200289616Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> coredns [d91696ad46445f4071e3355e2c90ce5e31e1058f8832ea170317062c4ac38ec1] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210813210044-393438
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210813210044-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=no-preload-20210813210044-393438
	                    minikube.k8s.io/updated_at=2021_08_13T21_11_13_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:11:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210813210044-393438
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 21:11:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:49 +0000   Fri, 13 Aug 2021 21:11:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.54
	  Hostname:    no-preload-20210813210044-393438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 daf87fe5c2b64cba9f2917b199ed5c40
	  System UUID:                daf87fe5-c2b6-4cba-9f29-17b199ed5c40
	  Boot ID:                    33200d1e-37c6-4466-b969-9244a67b04ce
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-r4dmk                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     33s
	  kube-system                 etcd-no-preload-20210813210044-393438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         40s
	  kube-system                 kube-apiserver-no-preload-20210813210044-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-no-preload-20210813210044-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-2k9qh                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-no-preload-20210813210044-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 metrics-server-7c784ccb57-7z8h9                             100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         28s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-kbbhs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-29b2r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 41s   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                33s   kubelet  Node no-preload-20210813210044-393438 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +3.628504] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.054709] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.047242] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +0.694583] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.344816] vboxguest: loading out-of-tree module taints kernel.
	[  +0.013428] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.818759] systemd-fstab-generator[2003]: Ignoring "noauto" for root device
	[  +0.134410] systemd-fstab-generator[2016]: Ignoring "noauto" for root device
	[  +0.203210] systemd-fstab-generator[2046]: Ignoring "noauto" for root device
	[ +22.779482] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[Aug13 21:06] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.282223] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.927510] kauditd_printk_skb: 44 callbacks suppressed
	[ +30.000159] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[Aug13 21:10] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.700521] systemd-fstab-generator[4509]: Ignoring "noauto" for root device
	[Aug13 21:11] systemd-fstab-generator[4900]: Ignoring "noauto" for root device
	[ +14.430798] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.003016] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.269146] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.365798] systemd-fstab-generator[6417]: Ignoring "noauto" for root device
	[  +0.840699] systemd-fstab-generator[6470]: Ignoring "noauto" for root device
	[  +1.066588] systemd-fstab-generator[6523]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [aa4d0f5069490fe488a17ee86308190a2af4d4745836e324736def6d252cf37e] <==
	* {"level":"info","ts":"2021-08-13T21:11:05.952Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ac82224e2d320a9e","local-member-attributes":"{Name:no-preload-20210813210044-393438 ClientURLs:[https://192.168.61.54:2379]}","request-path":"/0/members/ac82224e2d320a9e/attributes","cluster-id":"f6d71e843b8adcd6","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-13T21:11:05.952Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:11:05.956Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-13T21:11:05.958Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-13T21:11:05.962Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.61.54:2379"}
	{"level":"info","ts":"2021-08-13T21:11:05.966Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.966Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-13T21:11:05.967Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-13T21:11:05.983Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-13T21:11:05.985Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.054728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-20210813210044-393438\" ","response":"range_response_count:1 size:4387"}
	{"level":"info","ts":"2021-08-13T21:11:25.988Z","caller":"traceutil/trace.go:171","msg":"trace[1670945783] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-20210813210044-393438; range_end:; response_count:1; response_revision:356; }","duration":"101.68014ms","start":"2021-08-13T21:11:25.886Z","end":"2021-08-13T21:11:25.988Z","steps":["trace[1670945783] 'range keys from in-memory index tree'  (duration: 100.624608ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"224.476081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:11:25.990Z","caller":"traceutil/trace.go:171","msg":"trace[1805224945] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/cronjob-controller; range_end:; response_count:0; response_revision:356; }","duration":"225.89622ms","start":"2021-08-13T21:11:25.763Z","end":"2021-08-13T21:11:25.989Z","steps":["trace[1805224945] 'range keys from in-memory index tree'  (duration: 224.162884ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:25.988Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"199.33199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:11:25.991Z","caller":"traceutil/trace.go:171","msg":"trace[1794721521] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:356; }","duration":"202.261111ms","start":"2021-08-13T21:11:25.789Z","end":"2021-08-13T21:11:25.991Z","steps":["trace[1794721521] 'range keys from in-memory index tree'  (duration: 199.110207ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:11:26.820Z","caller":"traceutil/trace.go:171","msg":"trace[310762091] linearizableReadLoop","detail":"{readStateIndex:403; appliedIndex:402; }","duration":"154.563751ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.820Z","steps":["trace[310762091] 'read index received'  (duration: 140.61392ms)","trace[310762091] 'applied index is now lower than readState.Index'  (duration: 13.939721ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:11:26.821Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"155.547047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2021-08-13T21:11:26.823Z","caller":"traceutil/trace.go:171","msg":"trace[41867722] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:391; }","duration":"157.648992ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.822Z","steps":["trace[41867722] 'agreement among raft nodes before linearized reading'  (duration: 155.235689ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:26.824Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.894477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:275"}
	{"level":"info","ts":"2021-08-13T21:11:26.824Z","caller":"traceutil/trace.go:171","msg":"trace[172775019] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:391; }","duration":"104.161227ms","start":"2021-08-13T21:11:26.720Z","end":"2021-08-13T21:11:26.824Z","steps":["trace[172775019] 'agreement among raft nodes before linearized reading'  (duration: 103.843927ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:11:26.825Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"159.320795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2021-08-13T21:11:26.828Z","caller":"traceutil/trace.go:171","msg":"trace[1096478687] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:391; }","duration":"162.230159ms","start":"2021-08-13T21:11:26.665Z","end":"2021-08-13T21:11:26.828Z","steps":["trace[1096478687] 'agreement among raft nodes before linearized reading'  (duration: 159.257666ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-13T21:11:26.829Z","caller":"traceutil/trace.go:171","msg":"trace[866144520] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"180.947054ms","start":"2021-08-13T21:11:26.648Z","end":"2021-08-13T21:11:26.829Z","steps":["trace[866144520] 'process raft request'  (duration: 157.763042ms)","trace[866144520] 'compare'  (duration: 13.842093ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:11:59 up 6 min,  0 users,  load average: 3.02, 1.31, 0.56
	Linux no-preload-20210813210044-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [d237a3c155160c090aef2638fac2ef49826c0e45a1c9fbf33b1a870e8414dc70] <==
	* I0813 21:11:09.834023       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:11:09.835042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:11:09.835623       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0813 21:11:09.837287       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 21:11:09.906507       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 21:11:10.605681       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:11:10.605892       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:11:10.631884       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:11:10.649830       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:11:10.650997       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:11:11.792418       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:11:11.869795       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:11:12.001993       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.61.54]
	I0813 21:11:12.004862       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:11:12.011794       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:11:12.758778       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:11:13.598782       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:11:13.688361       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:11:19.013033       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:11:26.316569       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:11:26.406976       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:11:33.283869       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:11:33.284386       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:11:33.284412       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
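
The 503 for v1beta1.metrics.k8s.io is a consequence, not a cause: the metrics-server pod never starts (its image pull fails; see the kubelet section below), so the aggregated APIService has no healthy backend and the apiserver keeps requeueing the OpenAPI fetch. The Available condition on the APIService should carry the same message — a hedged check, same context as above:

    kubectl --context no-preload-20210813210044-393438 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'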
	
	* 
	* ==> kube-controller-manager [cf1afa08fe13ba8a74a4f0b33aa0d925e99e8f094cab2e808c0c2041af0bf075] <==
	* I0813 21:11:31.236495       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0813 21:11:31.451141       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:11:31.510992       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.551454       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:11:31.565458       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:31.609911       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.611047       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.611237       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.628036       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:31.650460       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.652169       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.681295       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.682227       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.704434       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.714520       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.722409       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.722715       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.791297       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.791870       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:31.792129       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:31.792179       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:31.883205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-kbbhs"
	I0813 21:11:31.918673       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-29b2r"
	E0813 21:11:56.414385       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:56.887641       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
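
The burst of FailedCreate errors above is a bootstrap ordering race rather than a persistent fault: the ReplicaSet controller retries until the kubernetes-dashboard ServiceAccount exists, and the two SuccessfulCreate events at 21:11:31.88 show both pods were created once it did. The trailing resource_quota and garbagecollector complaints are again the unavailable metrics.k8s.io group. To confirm the ServiceAccount after bring-up, a sketch:

    kubectl --context no-preload-20210813210044-393438 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard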
	
	* 
	* ==> kube-proxy [9e3a151de9b0448ebef9d7891a3dcc0317d781bc2ae3043f21506901a98b8313] <==
	* I0813 21:11:29.689520       1 node.go:172] Successfully retrieved node IP: 192.168.61.54
	I0813 21:11:29.689606       1 server_others.go:140] Detected node IP 192.168.61.54
	W0813 21:11:29.689635       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0813 21:11:29.817289       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:11:29.817465       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:11:29.817486       1 server_others.go:212] Using iptables Proxier.
	I0813 21:11:29.817939       1 server.go:649] Version: v1.22.0-rc.0
	I0813 21:11:29.828721       1 config.go:315] Starting service config controller
	I0813 21:11:29.828840       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:11:29.828868       1 config.go:224] Starting endpoint slice config controller
	I0813 21:11:29.828873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0813 21:11:29.921936       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210813210044-393438.169af9ff39543d6c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03dd5e070f95044, ext:973700886, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210813210044-393438", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", N
ame:"no-preload-20210813210044-393438", UID:"no-preload-20210813210044-393438", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210813210044-393438.169af9ff39543d6c" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0813 21:11:29.929926       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:11:29.929977       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [0b6d52d93d8b35cac6758a54772d2123443d59acfa35d2cddc00f881f935790f] <==
	* E0813 21:11:09.818571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:09.830591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:09.830812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:09.831110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:09.831199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:09.831272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:09.831486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:09.831554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.738132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:11:10.758808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.816520       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:10.829812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:10.886783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:10.889191       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:11.001964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:11.036551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:11.075533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:11.088529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:11.099768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:11.222275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:11.363918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:11.383325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:11.394730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:11:13.440768       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0813 21:11:13.587756       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
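
These "forbidden" list/watch errors are the usual scheduler start-up race: its informers begin syncing before the apiserver has finished bootstrapping the system:kube-scheduler RBAC bindings, and they stop once the caches sync (last line above). Had they persisted, the binding could be probed directly — a sketch using impersonation:

    kubectl --context no-preload-20210813210044-393438 auth can-i list pods --as=system:kube-scheduler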
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:17 UTC, end at Fri 2021-08-13 21:12:00 UTC. --
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:40.585772    4909 scope.go:110] "RemoveContainer" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:40.590146    4909 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:40 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:40.590192    4909 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a} err="failed to get container status \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.192893    4909 remote_runtime.go:276] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" containerID="a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.193235    4909 kuberuntime_container.go:719] "Container termination failed with gracePeriod" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 containerName="coredns" containerID="containerd://a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a" gracePeriod=1
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.193280    4909 kuberuntime_container.go:744] "Kill container failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 containerName="coredns" containerID={Type:containerd ID:a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a}
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:41.196792    4909 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27 path="/var/lib/kubelet/pods/ed8c9eb2-76b5-470d-8c30-a80df1e22f27/volumes"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.381260    4909 kubelet.go:1767] failed to "KillContainer" for "coredns" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\": not found"
	Aug 13 21:11:41 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:41.381433    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = NotFound desc = an error occurred when try to find container \\\"a706169cf9d9ea3dc89ffbf04130479544b492190c8947d8eb43d9c415b9982a\\\": not found\"" pod="kube-system/coredns-78fcd69978-2kv7b" podUID=ed8c9eb2-76b5-470d-8c30-a80df1e22f27
	Aug 13 21:11:42 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:42.511752    4909 scope.go:110] "RemoveContainer" containerID="12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:43.518201    4909 scope.go:110] "RemoveContainer" containerID="12fcad505e0658468120b8112d61560ad42eb6c85d9420c1e5e85d6001a48a6e"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:43.518588    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:43 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:43.519146    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:44 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:44.523154    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:44 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:44.524650    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.201796    4909 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.202002    4909 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.202877    4909 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8x8p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handle
r{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-7z8h9_kube-system(5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:45 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:45.203243    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-7z8h9" podUID=5e8a9f2d-6d0e-49b6-a7ce-a5cc9b3ff075
	Aug 13 21:11:51 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:51.946339    4909 scope.go:110] "RemoveContainer" containerID="c802f7156732ced340c55aea5a3509066c278d3776f87d0971f32fd335f5cb64"
	Aug 13 21:11:51 no-preload-20210813210044-393438 kubelet[4909]: E0813 21:11:51.947042    4909 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-kbbhs_kubernetes-dashboard(9eaa843a-02b4-4271-b662-874e5c0d8978)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-kbbhs" podUID=9eaa843a-02b4-4271-b662-874e5c0d8978
	Aug 13 21:11:53 no-preload-20210813210044-393438 kubelet[4909]: I0813 21:11:53.564732    4909 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:11:53 no-preload-20210813210044-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
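
The abrupt end of the kubelet log is the operation under test: minikube pause disables the kubelet unit over SSH before freezing the containers (the same "systemctl disable --now kubelet" step is visible in the pause trace for the other profile further down). Whether the unit is stopped can be checked the same way, for example:

    out/minikube-linux-amd64 -p no-preload-20210813210044-393438 ssh -- sudo systemctl is-active kubelet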
	
	* 
	* ==> kubernetes-dashboard [7a5c8d13e38e3cf08a72a47f2a60aa9316bf2695b97822b593a0fcf3029f9e83] <==
	* 2021/08/13 21:11:35 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:35 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:35 Using secret token for csrf signing
	2021/08/13 21:11:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:35 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/13 21:11:35 Generating JWE encryption key
	2021/08/13 21:11:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:35 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:35 Creating in-cluster Sidecar client
	2021/08/13 21:11:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:35 Serving insecurely on HTTP port: 9090
	2021/08/13 21:11:35 Starting overwatch
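
The only failure in the dashboard log is the metric client health check against dashboard-metrics-scraper, which is expected at this point: the scraper container was still starting or crash-looping around that time (kubelet section above), and the dashboard retries every 30 seconds. The service and its endpoints can be inspected with, for example:

    kubectl --context no-preload-20210813210044-393438 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper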
	
	* 
	* ==> storage-provisioner [d190ad9281e2726622d727ea75715140b8a648ca41a6b6c911e0e300947a0922] <==
	* I0813 21:11:33.244461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:11:33.295965       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:11:33.296687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:11:33.316175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbd3d7cc-a02d-4c39-8593-ff7ef6900f96", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5 became leader
	I0813 21:11:33.316707       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:11:33.317571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5!
	I0813 21:11:33.421238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210813210044-393438_cb1bfdfd-898f-42c7-8ebe-e956d2baf3c5!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438: exit status 2 (301.510935ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-7z8h9
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9: exit status 1 (84.895727ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-7z8h9" not found

** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210813210044-393438 describe pod metrics-server-7c784ccb57-7z8h9: exit status 1
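
Note the post-mortem race above: metrics-server-7c784ccb57-7z8h9 showed up in the non-running list, but was already gone (NotFound) by the time describe ran, presumably replaced or cleaned up in between. Capturing the details in the same call as the listing would avoid the gap — a sketch, not part of the harness:

    kubectl --context no-preload-20210813210044-393438 get po -A --field-selector=status.phase!=Running -o yaml
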
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.00s)

TestStartStop/group/default-k8s-different-port/serial/Pause (26.72s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210121-393438 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210121-393438 --alsologtostderr -v=1: exit status 80 (2.483405963s)

-- stdout --
	* Pausing node default-k8s-different-port-20210813210121-393438 ... 
	
	

-- /stdout --
** stderr ** 
	I0813 21:12:13.784020  435820 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:13.784245  435820 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:13.784258  435820 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:13.784264  435820 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:13.784390  435820 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:13.784597  435820 out.go:305] Setting JSON to false
	I0813 21:12:13.784628  435820 mustload.go:65] Loading cluster: default-k8s-different-port-20210813210121-393438
	I0813 21:12:13.784982  435820 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:13.785394  435820 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:13.785450  435820 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:13.798142  435820 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0813 21:12:13.798766  435820 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:13.799398  435820 main.go:130] libmachine: Using API Version  1
	I0813 21:12:13.799423  435820 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:13.799857  435820 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:13.800058  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetState
	I0813 21:12:13.803662  435820 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:12:13.804127  435820 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:13.804173  435820 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:13.815468  435820 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40779
	I0813 21:12:13.815891  435820 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:13.816323  435820 main.go:130] libmachine: Using API Version  1
	I0813 21:12:13.816349  435820 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:13.816701  435820 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:13.816910  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:12:13.817495  435820 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210813210121-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:12:13.820120  435820 out.go:177] * Pausing node default-k8s-different-port-20210813210121-393438 ... 
	I0813 21:12:13.820155  435820 host.go:66] Checking if "default-k8s-different-port-20210813210121-393438" exists ...
	I0813 21:12:13.820489  435820 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:13.820532  435820 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:13.831971  435820 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0813 21:12:13.832414  435820 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:13.832918  435820 main.go:130] libmachine: Using API Version  1
	I0813 21:12:13.832962  435820 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:13.833329  435820 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:13.833545  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .DriverName
	I0813 21:12:13.833784  435820 ssh_runner.go:149] Run: systemctl --version
	I0813 21:12:13.833827  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHHostname
	I0813 21:12:13.839866  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:12:13.840284  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:d4:cb", ip: ""} in network mk-default-k8s-different-port-20210813210121-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:05:03 +0000 UTC Type:0 Mac:52:54:00:49:d4:cb Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:default-k8s-different-port-20210813210121-393438 Clientid:01:52:54:00:49:d4:cb}
	I0813 21:12:13.840323  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) DBG | domain default-k8s-different-port-20210813210121-393438 has defined IP address 192.168.39.163 and MAC address 52:54:00:49:d4:cb in network mk-default-k8s-different-port-20210813210121-393438
	I0813 21:12:13.840436  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHPort
	I0813 21:12:13.840604  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHKeyPath
	I0813 21:12:13.840778  435820 main.go:130] libmachine: (default-k8s-different-port-20210813210121-393438) Calling .GetSSHUsername
	I0813 21:12:13.840914  435820 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/default-k8s-different-port-20210813210121-393438/id_rsa Username:docker}
	I0813 21:12:13.950472  435820 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:13.964458  435820 pause.go:50] kubelet running: true
	I0813 21:12:13.964543  435820 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:14.233791  435820 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:14.233904  435820 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:14.370003  435820 cri.go:76] found id: "9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e"
	I0813 21:12:14.370041  435820 cri.go:76] found id: "db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5"
	I0813 21:12:14.370050  435820 cri.go:76] found id: "bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47"
	I0813 21:12:14.370058  435820 cri.go:76] found id: "f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054"
	I0813 21:12:14.370065  435820 cri.go:76] found id: "74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8"
	I0813 21:12:14.370073  435820 cri.go:76] found id: "ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22"
	I0813 21:12:14.370079  435820 cri.go:76] found id: "e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702"
	I0813 21:12:14.370090  435820 cri.go:76] found id: "e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	I0813 21:12:14.370097  435820 cri.go:76] found id: "13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55"
	I0813 21:12:14.370115  435820 cri.go:76] found id: ""
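
The nine IDs found above come from a single shelled-out command that runs `crictl ps -a --quiet` once per namespace of interest and concatenates the results. A rough Go equivalent, assuming crictl and sudo are available on the node (listCRIContainers is a hypothetical helper, not minikube's actual cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers returns the IDs of all containers whose pod lives in
    // one of the given namespaces, mimicking the `crictl ps -a --quiet
    // --label io.kubernetes.pod.namespace=...` loop in the log.
    func listCRIContainers(namespaces []string) ([]string, error) {
        var ids []string
        for _, ns := range namespaces {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
                "--label", "io.kubernetes.pod.namespace="+ns).Output()
            if err != nil {
                return nil, err
            }
            // crictl --quiet prints one container ID per line.
            ids = append(ids, strings.Fields(string(out))...)
        }
        return ids, nil
    }

    func main() {
        ids, err := listCRIContainers([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"})
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }
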
	I0813 21:12:14.370170  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:14.423721  435820 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","pid":6862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55/rootfs","created":"2021-08-13T21:12:01.927876598Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","pid":6780,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","rootfs":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5/rootfs","created":"2021-08-13T21:12:01.073904418Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6tdsg_6860364e-45f9-41da-a2c3-763cf331586e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","pid":5325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1/rootfs","created":"2021-08-13T21:11:30.463540272Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441
a027d05351a56bed9ad4f77e4b57d3fe84c0b1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813210121-393438_ebdfbc476119fe5e49f487dd0d9e6f26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","pid":6512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4/rootfs","created":"2021-08-13T21:11:59.500857183Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qq4n6_c5878f91-7def-4945-96e9-d0ffc69ebaa4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b72edbb20bb5e5a9217171b5f2d
144df3402ddc040386c920fee225cfe0699a","pid":5299,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a/rootfs","created":"2021-08-13T21:11:30.414133201Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210813210121-393438_870be8126843e1670189973bbbfb2843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","pid":5487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","rootfs":"/run/containerd/io.containerd.
runtime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8/rootfs","created":"2021-08-13T21:11:31.514504923Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","pid":6820,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e/rootfs","created":"2021-08-13T21:12:01.516662219Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c"},"o
wner":"root"},{"ociVersion":"1.0.2-dev","id":"bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","pid":6008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47/rootfs","created":"2021-08-13T21:11:54.952484263Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","pid":6710,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e1
2780d841b5ee9e0dc78fa47325e6711f6b7/rootfs","created":"2021-08-13T21:12:00.682015924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-mk55h_c4b71b47-1c44-4b09-b5ec-4a9708e68adb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720/rootfs","created":"2021-08-13T21:11:30.854840379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","io.kuber
netes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813210121-393438_ac1b725b17613b4ea6ee480208087eae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","pid":6113,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c/rootfs","created":"2021-08-13T21:11:55.665584598Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-lzm4s_289230be-e90a-464b-adf7-4af4147996a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","pid":53
62,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf/rootfs","created":"2021-08-13T21:11:30.654799645Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813210121-393438_b09c9bc51fdda31cf3990d3d04b0dc8d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","pid":6285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd65
7ba0dffa31d3b88a503cb689771f0fd5/rootfs","created":"2021-08-13T21:11:56.993894185Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","pid":5420,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702/rootfs","created":"2021-08-13T21:11:31.127491894Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec76c816427a
d33994b2055617e82d025e6d6ca3d54b5738666193855befdd22","pid":5476,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22/rootfs","created":"2021-08-13T21:11:31.543395155Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","pid":5841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206/rootfs","created":"2021-08-1
3T21:11:54.411180806Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-kkw6b_34e60b1a-3b8e-44fd-9e60-7f762f693943"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","pid":5540,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054/rootfs","created":"2021-08-13T21:11:32.030576007Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fc1b3
e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","pid":6531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c/rootfs","created":"2021-08-13T21:12:00.19797689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_8c6aee04-a20c-445c-835a-5dc57e81b7f5"},"owner":"root"}]
	I0813 21:12:14.424021  435820 cri.go:113] list returned 18 containers
	I0813 21:12:14.424048  435820 cri.go:116] container: {ID:13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 Status:running}
	I0813 21:12:14.424077  435820 cri.go:116] container: {ID:4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 Status:running}
	I0813 21:12:14.424085  435820 cri.go:118] skipping 4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 - not in ps
	I0813 21:12:14.424091  435820 cri.go:116] container: {ID:57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 Status:running}
	I0813 21:12:14.424098  435820 cri.go:118] skipping 57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 - not in ps
	I0813 21:12:14.424108  435820 cri.go:116] container: {ID:5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 Status:running}
	I0813 21:12:14.424115  435820 cri.go:118] skipping 5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 - not in ps
	I0813 21:12:14.424122  435820 cri.go:116] container: {ID:5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a Status:running}
	I0813 21:12:14.424129  435820 cri.go:118] skipping 5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a - not in ps
	I0813 21:12:14.424139  435820 cri.go:116] container: {ID:74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 Status:running}
	I0813 21:12:14.424146  435820 cri.go:116] container: {ID:9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e Status:running}
	I0813 21:12:14.424155  435820 cri.go:116] container: {ID:bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47 Status:running}
	I0813 21:12:14.424162  435820 cri.go:116] container: {ID:bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 Status:running}
	I0813 21:12:14.424172  435820 cri.go:118] skipping bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 - not in ps
	I0813 21:12:14.424177  435820 cri.go:116] container: {ID:cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 Status:running}
	I0813 21:12:14.424188  435820 cri.go:118] skipping cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 - not in ps
	I0813 21:12:14.424195  435820 cri.go:116] container: {ID:d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c Status:running}
	I0813 21:12:14.424211  435820 cri.go:118] skipping d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c - not in ps
	I0813 21:12:14.424219  435820 cri.go:116] container: {ID:d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf Status:running}
	I0813 21:12:14.424232  435820 cri.go:118] skipping d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf - not in ps
	I0813 21:12:14.424240  435820 cri.go:116] container: {ID:db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5 Status:running}
	I0813 21:12:14.424247  435820 cri.go:116] container: {ID:e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702 Status:running}
	I0813 21:12:14.424256  435820 cri.go:116] container: {ID:ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22 Status:running}
	I0813 21:12:14.424262  435820 cri.go:116] container: {ID:f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 Status:running}
	I0813 21:12:14.424272  435820 cri.go:118] skipping f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 - not in ps
	I0813 21:12:14.424278  435820 cri.go:116] container: {ID:f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054 Status:running}
	I0813 21:12:14.424287  435820 cri.go:116] container: {ID:fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c Status:running}
	I0813 21:12:14.424295  435820 cri.go:118] skipping fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c - not in ps
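
The `runc list -f json` dump above is then decoded and cross-checked against the crictl IDs: sandbox (pause) containers never appear in the crictl listing, so they are skipped as "not in ps", and entries not in the wanted state are skipped too, leaving only the application containers to pause. A condensed sketch of that filter, assuming the JSON shape shown above (selectContainers is illustrative, not the actual cri.go implementation):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // runcContainer models just the fields of `runc list -f json` that the
    // filtering in the log consults.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // selectContainers keeps only IDs that crictl also reported -- sandboxes
    // never show up there, hence "skipping ... - not in ps" -- and that are
    // currently in wantState.
    func selectContainers(listJSON []byte, inPS map[string]bool, wantState string) ([]string, error) {
        var all []runcContainer
        if err := json.Unmarshal(listJSON, &all); err != nil {
            return nil, err
        }
        var keep []string
        for _, c := range all {
            switch {
            case !inPS[c.ID]:
                fmt.Printf("skipping %s - not in ps\n", c.ID)
            case c.Status != wantState:
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, wantState)
            default:
                keep = append(keep, c.ID)
            }
        }
        return keep, nil
    }

    func main() {
        sample := []byte(`[{"id":"aaa","status":"running"},{"id":"bbb","status":"running"},{"id":"ccc","status":"paused"}]`)
        ids, _ := selectContainers(sample, map[string]bool{"aaa": true, "ccc": true}, "running")
        fmt.Println("would pause:", ids) // "bbb" is dropped as not in ps, "ccc" as paused
    }
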
	I0813 21:12:14.424348  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55
	I0813 21:12:14.451518  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8
	I0813 21:12:14.471600  435820 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:14.748094  435820 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:14.761444  435820 pause.go:50] kubelet running: false
	I0813 21:12:14.761505  435820 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:14.958357  435820 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:14.958436  435820 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:15.096592  435820 cri.go:76] found id: "9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e"
	I0813 21:12:15.096620  435820 cri.go:76] found id: "db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5"
	I0813 21:12:15.096626  435820 cri.go:76] found id: "bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47"
	I0813 21:12:15.096632  435820 cri.go:76] found id: "f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054"
	I0813 21:12:15.096638  435820 cri.go:76] found id: "74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8"
	I0813 21:12:15.096644  435820 cri.go:76] found id: "ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22"
	I0813 21:12:15.096649  435820 cri.go:76] found id: "e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702"
	I0813 21:12:15.096655  435820 cri.go:76] found id: "e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	I0813 21:12:15.096660  435820 cri.go:76] found id: "13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55"
	I0813 21:12:15.096669  435820 cri.go:76] found id: ""
	I0813 21:12:15.096780  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:15.139574  435820 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","pid":6862,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55/rootfs","created":"2021-08-13T21:12:01.927876598Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","pid":6780,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5/rootfs","created":"2021-08-13T21:12:01.073904418Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6tdsg_6860364e-45f9-41da-a2c3-763cf331586e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","pid":5325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1/rootfs","created":"2021-08-13T21:11:30.463540272Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441a
027d05351a56bed9ad4f77e4b57d3fe84c0b1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813210121-393438_ebdfbc476119fe5e49f487dd0d9e6f26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","pid":6512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4/rootfs","created":"2021-08-13T21:11:59.500857183Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qq4n6_c5878f91-7def-4945-96e9-d0ffc69ebaa4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b72edbb20bb5e5a9217171b5f2d1
44df3402ddc040386c920fee225cfe0699a","pid":5299,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a/rootfs","created":"2021-08-13T21:11:30.414133201Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210813210121-393438_870be8126843e1670189973bbbfb2843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","pid":5487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","rootfs":"/run/containerd/io.containerd.r
untime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8/rootfs","created":"2021-08-13T21:11:31.514504923Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","pid":6820,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e/rootfs","created":"2021-08-13T21:12:01.516662219Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c"},"ow
ner":"root"},{"ociVersion":"1.0.2-dev","id":"bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","pid":6008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47/rootfs","created":"2021-08-13T21:11:54.952484263Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","pid":6710,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e12
780d841b5ee9e0dc78fa47325e6711f6b7/rootfs","created":"2021-08-13T21:12:00.682015924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-mk55h_c4b71b47-1c44-4b09-b5ec-4a9708e68adb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720/rootfs","created":"2021-08-13T21:11:30.854840379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","io.kubern
etes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813210121-393438_ac1b725b17613b4ea6ee480208087eae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","pid":6113,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c/rootfs","created":"2021-08-13T21:11:55.665584598Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-lzm4s_289230be-e90a-464b-adf7-4af4147996a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","pid":536
2,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf/rootfs","created":"2021-08-13T21:11:30.654799645Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813210121-393438_b09c9bc51fdda31cf3990d3d04b0dc8d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","pid":6285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd657
ba0dffa31d3b88a503cb689771f0fd5/rootfs","created":"2021-08-13T21:11:56.993894185Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","pid":5420,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702/rootfs","created":"2021-08-13T21:11:31.127491894Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec76c816427ad
33994b2055617e82d025e6d6ca3d54b5738666193855befdd22","pid":5476,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22/rootfs","created":"2021-08-13T21:11:31.543395155Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","pid":5841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206/rootfs","created":"2021-08-13
T21:11:54.411180806Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-kkw6b_34e60b1a-3b8e-44fd-9e60-7f762f693943"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","pid":5540,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054/rootfs","created":"2021-08-13T21:11:32.030576007Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fc1b3e
504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","pid":6531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c/rootfs","created":"2021-08-13T21:12:00.19797689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_8c6aee04-a20c-445c-835a-5dc57e81b7f5"},"owner":"root"}]
	I0813 21:12:15.139828  435820 cri.go:113] list returned 18 containers
	I0813 21:12:15.139846  435820 cri.go:116] container: {ID:13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 Status:paused}
	I0813 21:12:15.139862  435820 cri.go:122] skipping {13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 paused}: state = "paused", want "running"
	I0813 21:12:15.139881  435820 cri.go:116] container: {ID:4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 Status:running}
	I0813 21:12:15.139889  435820 cri.go:118] skipping 4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 - not in ps
	I0813 21:12:15.139894  435820 cri.go:116] container: {ID:57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 Status:running}
	I0813 21:12:15.139902  435820 cri.go:118] skipping 57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 - not in ps
	I0813 21:12:15.139907  435820 cri.go:116] container: {ID:5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 Status:running}
	I0813 21:12:15.139914  435820 cri.go:118] skipping 5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 - not in ps
	I0813 21:12:15.139919  435820 cri.go:116] container: {ID:5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a Status:running}
	I0813 21:12:15.139926  435820 cri.go:118] skipping 5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a - not in ps
	I0813 21:12:15.139931  435820 cri.go:116] container: {ID:74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 Status:running}
	I0813 21:12:15.139938  435820 cri.go:116] container: {ID:9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e Status:running}
	I0813 21:12:15.139944  435820 cri.go:116] container: {ID:bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47 Status:running}
	I0813 21:12:15.139951  435820 cri.go:116] container: {ID:bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 Status:running}
	I0813 21:12:15.139958  435820 cri.go:118] skipping bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 - not in ps
	I0813 21:12:15.139963  435820 cri.go:116] container: {ID:cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 Status:running}
	I0813 21:12:15.139969  435820 cri.go:118] skipping cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 - not in ps
	I0813 21:12:15.139974  435820 cri.go:116] container: {ID:d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c Status:running}
	I0813 21:12:15.139983  435820 cri.go:118] skipping d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c - not in ps
	I0813 21:12:15.139988  435820 cri.go:116] container: {ID:d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf Status:running}
	I0813 21:12:15.139995  435820 cri.go:118] skipping d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf - not in ps
	I0813 21:12:15.140000  435820 cri.go:116] container: {ID:db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5 Status:running}
	I0813 21:12:15.140006  435820 cri.go:116] container: {ID:e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702 Status:running}
	I0813 21:12:15.140013  435820 cri.go:116] container: {ID:ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22 Status:running}
	I0813 21:12:15.140022  435820 cri.go:116] container: {ID:f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 Status:running}
	I0813 21:12:15.140029  435820 cri.go:118] skipping f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 - not in ps
	I0813 21:12:15.140035  435820 cri.go:116] container: {ID:f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054 Status:running}
	I0813 21:12:15.140040  435820 cri.go:116] container: {ID:fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c Status:running}
	I0813 21:12:15.140046  435820 cri.go:118] skipping fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c - not in ps
	I0813 21:12:15.140098  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8
	I0813 21:12:15.163844  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e
	I0813 21:12:15.185220  435820 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:15Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:15.725931  435820 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:15.740967  435820 pause.go:50] kubelet running: false
	I0813 21:12:15.741060  435820 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:15.947834  435820 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:15.947949  435820 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:16.098239  435820 cri.go:76] found id: "9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e"
	I0813 21:12:16.098270  435820 cri.go:76] found id: "db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5"
	I0813 21:12:16.098277  435820 cri.go:76] found id: "bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47"
	I0813 21:12:16.098284  435820 cri.go:76] found id: "f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054"
	I0813 21:12:16.098289  435820 cri.go:76] found id: "74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8"
	I0813 21:12:16.098296  435820 cri.go:76] found id: "ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22"
	I0813 21:12:16.098302  435820 cri.go:76] found id: "e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702"
	I0813 21:12:16.098307  435820 cri.go:76] found id: "e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	I0813 21:12:16.098313  435820 cri.go:76] found id: "13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55"
	I0813 21:12:16.098323  435820 cri.go:76] found id: ""
	I0813 21:12:16.098369  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:16.151572  435820 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","pid":6862,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55/rootfs","created":"2021-08-13T21:12:01.927876598Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","pid":6780,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5/rootfs","created":"2021-08-13T21:12:01.073904418Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6tdsg_6860364e-45f9-41da-a2c3-763cf331586e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","pid":5325,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1/rootfs","created":"2021-08-13T21:11:30.463540272Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441a
027d05351a56bed9ad4f77e4b57d3fe84c0b1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210813210121-393438_ebdfbc476119fe5e49f487dd0d9e6f26"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","pid":6512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4/rootfs","created":"2021-08-13T21:11:59.500857183Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qq4n6_c5878f91-7def-4945-96e9-d0ffc69ebaa4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b72edbb20bb5e5a9217171b5f2d1
44df3402ddc040386c920fee225cfe0699a","pid":5299,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a/rootfs","created":"2021-08-13T21:11:30.414133201Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210813210121-393438_870be8126843e1670189973bbbfb2843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","pid":5487,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8","rootfs":"/run/containerd/io.containerd.ru
ntime.v2.task/k8s.io/74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8/rootfs","created":"2021-08-13T21:11:31.514504923Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","pid":6820,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e/rootfs","created":"2021-08-13T21:12:01.516662219Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c"},"own
er":"root"},{"ociVersion":"1.0.2-dev","id":"bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","pid":6008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47/rootfs","created":"2021-08-13T21:11:54.952484263Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","pid":6710,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd7ecd405185b04473fbc3fd25e127
80d841b5ee9e0dc78fa47325e6711f6b7/rootfs","created":"2021-08-13T21:12:00.682015924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-mk55h_c4b71b47-1c44-4b09-b5ec-4a9708e68adb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720/rootfs","created":"2021-08-13T21:11:30.854840379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720","io.kuberne
tes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210813210121-393438_ac1b725b17613b4ea6ee480208087eae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","pid":6113,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c/rootfs","created":"2021-08-13T21:11:55.665584598Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-lzm4s_289230be-e90a-464b-adf7-4af4147996a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","pid":5362
,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf/rootfs","created":"2021-08-13T21:11:30.654799645Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210813210121-393438_b09c9bc51fdda31cf3990d3d04b0dc8d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","pid":6285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7a83df618b3ea293c6b7bf50ecbd657b
a0dffa31d3b88a503cb689771f0fd5/rootfs","created":"2021-08-13T21:11:56.993894185Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","pid":5420,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702/rootfs","created":"2021-08-13T21:11:31.127491894Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec76c816427ad3
3994b2055617e82d025e6d6ca3d54b5738666193855befdd22","pid":5476,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22/rootfs","created":"2021-08-13T21:11:31.543395155Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","pid":5841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206/rootfs","created":"2021-08-13T
21:11:54.411180806Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-kkw6b_34e60b1a-3b8e-44fd-9e60-7f762f693943"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","pid":5540,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054/rootfs","created":"2021-08-13T21:11:32.030576007Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fc1b3e5
04b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","pid":6531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c/rootfs","created":"2021-08-13T21:12:00.19797689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_8c6aee04-a20c-445c-835a-5dc57e81b7f5"},"owner":"root"}]
	I0813 21:12:16.151753  435820 cri.go:113] list returned 18 containers
	I0813 21:12:16.151765  435820 cri.go:116] container: {ID:13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 Status:paused}
	I0813 21:12:16.151776  435820 cri.go:122] skipping {13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55 paused}: state = "paused", want "running"
	I0813 21:12:16.151785  435820 cri.go:116] container: {ID:4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 Status:running}
	I0813 21:12:16.151790  435820 cri.go:118] skipping 4ce0cc7ececda4632636fbb789baa41e69becd1d1c59c598e0a9183a044ee4f5 - not in ps
	I0813 21:12:16.151796  435820 cri.go:116] container: {ID:57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 Status:running}
	I0813 21:12:16.151801  435820 cri.go:118] skipping 57df03b29f44a606ee6b29d441a027d05351a56bed9ad4f77e4b57d3fe84c0b1 - not in ps
	I0813 21:12:16.151804  435820 cri.go:116] container: {ID:5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 Status:running}
	I0813 21:12:16.151809  435820 cri.go:118] skipping 5a7a8a488388f11571b3a88a7f4ab45e6b081be5d16159d4bf91b992dc1084b4 - not in ps
	I0813 21:12:16.151812  435820 cri.go:116] container: {ID:5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a Status:running}
	I0813 21:12:16.151816  435820 cri.go:118] skipping 5b72edbb20bb5e5a9217171b5f2d144df3402ddc040386c920fee225cfe0699a - not in ps
	I0813 21:12:16.151820  435820 cri.go:116] container: {ID:74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 Status:paused}
	I0813 21:12:16.151824  435820 cri.go:122] skipping {74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8 paused}: state = "paused", want "running"
	I0813 21:12:16.151828  435820 cri.go:116] container: {ID:9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e Status:running}
	I0813 21:12:16.151832  435820 cri.go:116] container: {ID:bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47 Status:running}
	I0813 21:12:16.151836  435820 cri.go:116] container: {ID:bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 Status:running}
	I0813 21:12:16.151840  435820 cri.go:118] skipping bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7 - not in ps
	I0813 21:12:16.151844  435820 cri.go:116] container: {ID:cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 Status:running}
	I0813 21:12:16.151848  435820 cri.go:118] skipping cebfd1f671bb7996336c1c5867484f3d6a8e9a6490860993daa89c6f63942720 - not in ps
	I0813 21:12:16.151851  435820 cri.go:116] container: {ID:d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c Status:running}
	I0813 21:12:16.151855  435820 cri.go:118] skipping d9b8ac5fbc2b548979675ee95be74e003022c855baebe17269c350230b31a56c - not in ps
	I0813 21:12:16.151858  435820 cri.go:116] container: {ID:d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf Status:running}
	I0813 21:12:16.151862  435820 cri.go:118] skipping d9bd2c2bce36058567504b1f746b499177de2a767aa324cc228d3b5da4edd8bf - not in ps
	I0813 21:12:16.151866  435820 cri.go:116] container: {ID:db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5 Status:running}
	I0813 21:12:16.151872  435820 cri.go:116] container: {ID:e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702 Status:running}
	I0813 21:12:16.151882  435820 cri.go:116] container: {ID:ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22 Status:running}
	I0813 21:12:16.151889  435820 cri.go:116] container: {ID:f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 Status:running}
	I0813 21:12:16.151899  435820 cri.go:118] skipping f06a4e1dc0256a231dc9559808c57c12eed9f5f746223b0b0d2559cca984d206 - not in ps
	I0813 21:12:16.151903  435820 cri.go:116] container: {ID:f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054 Status:running}
	I0813 21:12:16.151907  435820 cri.go:116] container: {ID:fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c Status:running}
	I0813 21:12:16.151912  435820 cri.go:118] skipping fc1b3e504b6999415eb682d05d58853842182bd77b78ade0d212a13f5d1b676c - not in ps
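The cri.go lines above implement a simple filter: every container returned by the runtime is kept only when its state is "running" and its ID also appears in the ps output; anything else is skipped with one of the two messages shown. A minimal Go sketch of that filtering (an illustration, not minikube's actual cri.go code; the Container type and the inPs set are assumptions):

    package main

    import "fmt"

    // Container mirrors the {ID Status} pairs printed at cri.go:116 above.
    type Container struct {
        ID     string
        Status string
    }

    // filterRunning keeps only IDs that are in the "running" state and
    // present in the ps output, echoing the skip messages from the log.
    func filterRunning(all []Container, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, "running")
                continue
            }
            if !inPs[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID)
                continue
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        containers := []Container{
            {ID: "13422228dfbf", Status: "paused"},  // skipped: paused
            {ID: "9e8933c4e874", Status: "running"}, // kept: running and in ps
        }
        fmt.Println(filterRunning(containers, map[string]bool{"9e8933c4e874": true}))
    }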
	I0813 21:12:16.151971  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e
	I0813 21:12:16.172464  435820 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47
	I0813 21:12:16.197995  435820 out.go:177] 
	W0813 21:12:16.198166  435820 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:16Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 21:12:16.198197  435820 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:12:16.202024  435820 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:12:16.203733  435820 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813210121-393438 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438: exit status 2 (266.078575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813210121-393438 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210813210121-393438 logs -n 25: exit status 110 (11.237191344s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | disable-driver-mounts-20210813210121-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:01:21 UTC |
	|         | disable-driver-mounts-20210813210121-393438       |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:53 UTC | Fri, 13 Aug 2021 21:02:44 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:53 UTC | Fri, 13 Aug 2021 21:02:54 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:03:08 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:03:16 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:12:02
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:12:02.440919  435569 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:02.441013  435569 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:02.441018  435569 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:02.441023  435569 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:02.441169  435569 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:02.441563  435569 out.go:305] Setting JSON to false
	I0813 21:12:02.480588  435569 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6885,"bootTime":1628882238,"procs":200,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:12:02.480745  435569 start.go:121] virtualization: kvm guest
	I0813 21:12:02.482750  435569 out.go:177] * [newest-cni-20210813211202-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:12:02.484177  435569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:02.482925  435569 notify.go:169] Checking for updates...
	I0813 21:12:02.485531  435569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:12:02.486862  435569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:02.488153  435569 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:12:02.488757  435569 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:02.488905  435569 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:02.489031  435569 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:12:02.489086  435569 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:12:02.521538  435569 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:12:02.521562  435569 start.go:278] selected driver: kvm2
	I0813 21:12:02.521567  435569 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:12:02.521584  435569 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:12:02.523028  435569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:02.523250  435569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:12:02.537216  435569 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:12:02.537273  435569 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:12:02.537294  435569 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:12:02.537465  435569 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:12:02.537497  435569 cni.go:93] Creating CNI manager for ""
	I0813 21:12:02.537504  435569 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:02.537513  435569 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:12:02.537526  435569 start_flags.go:277] config:
	{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
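The cni.go lines above show the plugin decision: no CNI was requested, and the kvm2 driver with the containerd runtime leads minikube to recommend a bridge CNI and set NetworkPlugin=cni in the config that follows. A hypothetical reduction of that decision (an assumption-laden sketch, not minikube's actual code):

    package main

    import "fmt"

    // chooseCNI is a hypothetical reduction of the decision at cni.go:163:
    // with no CNI requested, a non-docker runtime (containerd here, under
    // the kvm2 driver) gets a bridge CNI. Not minikube's actual code.
    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested
        }
        if runtime != "docker" {
            return "bridge"
        }
        return "" // docker ships its own networking by default
    }

    func main() {
        fmt.Println(chooseCNI("kvm2", "containerd", "")) // bridge
    }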
	I0813 21:12:02.537684  435569 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:02.539702  435569 out.go:177] * Starting control plane node newest-cni-20210813211202-393438 in cluster newest-cni-20210813211202-393438
	I0813 21:12:02.539729  435569 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:12:02.539787  435569 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 21:12:02.539837  435569 cache.go:56] Caching tarball of preloaded images
	I0813 21:12:02.540011  435569 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:12:02.540046  435569 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 21:12:02.540155  435569 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json ...
	I0813 21:12:02.540180  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json: {Name:mk93f53330c0201aab4d93f29b753ec30fd29552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:02.540343  435569 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:12:02.540371  435569 start.go:313] acquiring machines lock for newest-cni-20210813211202-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:12:02.540424  435569 start.go:317] acquired machines lock for "newest-cni-20210813211202-393438" in 36.833µs
	I0813 21:12:02.540453  435569 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 Kubern
etesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:12:02.540543  435569 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:11:58.581342  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:00.588539  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:02.171246  434036 system_pods.go:86] 6 kube-system pods found
	I0813 21:12:02.171274  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171282  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171289  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171298  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171310  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:02.171320  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171342  434036 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver
	I0813 21:12:03.521720  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:03.521755  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521763  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:03.521769  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521775  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521782  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521794  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:03.521825  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521847  434036 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver
	I0813 21:12:05.405867  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:05.405905  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405914  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:05.405921  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405928  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405934  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405947  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:05.405956  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405977  434036 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver
	I0813 21:12:02.542365  435569 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:12:02.542518  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:02.542566  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:02.553296  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0813 21:12:02.553750  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:02.554251  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:12:02.554276  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:02.554656  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:02.554891  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetMachineName
	I0813 21:12:02.555033  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:02.555176  435569 start.go:160] libmachine.API.Create for "newest-cni-20210813211202-393438" (driver="kvm2")
	I0813 21:12:02.555208  435569 client.go:168] LocalClient.Create starting
	I0813 21:12:02.555246  435569 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:12:02.555274  435569 main.go:130] libmachine: Decoding PEM data...
	I0813 21:12:02.555297  435569 main.go:130] libmachine: Parsing certificate...
	I0813 21:12:02.555426  435569 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:12:02.555452  435569 main.go:130] libmachine: Decoding PEM data...
	I0813 21:12:02.555464  435569 main.go:130] libmachine: Parsing certificate...
	I0813 21:12:02.555503  435569 main.go:130] libmachine: Running pre-create checks...
	I0813 21:12:02.555514  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .PreCreateCheck
	I0813 21:12:02.555872  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetConfigRaw
	I0813 21:12:02.556302  435569 main.go:130] libmachine: Creating machine...
	I0813 21:12:02.556322  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Create
	I0813 21:12:02.556447  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating KVM machine...
	I0813 21:12:02.559307  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found existing default KVM network
	I0813 21:12:02.561095  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.560929  435593 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:fe:39}}
	I0813 21:12:02.561913  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.561849  435593 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 21:12:02.563382  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.563270  435593 network.go:288] reserving subnet 192.168.61.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.61.0:0xc0000a85d8] misses:0}
	I0813 21:12:02.563448  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.563309  435593 network.go:235] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
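The network.go DBG lines above show the subnet scan: candidate /24 ranges already claimed by an existing bridge (192.168.39.0/24 on virbr1, 192.168.50.0/24 on virbr2) are skipped, and the first free one (192.168.61.0/24) is reserved. A minimal sketch of that scan; the step of 11 between candidates is inferred from the 39 -> 50 -> 61 progression in the log, and the function is an illustration, not minikube's network.go:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate 192.168.x.0/24 ranges and returns the
    // first one not already held by a host bridge. Illustrative only.
    func firstFreeSubnet(taken map[int]bool) string {
        for octet := 39; octet < 255; octet += 11 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            return fmt.Sprintf("192.168.%d.0/24", octet)
        }
        return ""
    }

    func main() {
        // virbr1 and virbr2 already hold .39 and .50, per the log above
        taken := map[int]bool{39: true, 50: true}
        fmt.Println("reserving", firstFreeSubnet(taken)) // 192.168.61.0/24
    }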
	I0813 21:12:02.589019  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | trying to create private KVM network mk-newest-cni-20210813211202-393438 192.168.61.0/24...
	I0813 21:12:02.847187  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | private KVM network mk-newest-cni-20210813211202-393438 192.168.61.0/24 created
	I0813 21:12:02.847253  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.847124  435593 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:02.847286  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 ...
	I0813 21:12:02.847333  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:12:02.847369  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:12:03.066852  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.066718  435593 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa...
	I0813 21:12:03.150883  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.150762  435593 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/newest-cni-20210813211202-393438.rawdisk...
	I0813 21:12:03.150920  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Writing magic tar header
	I0813 21:12:03.150940  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Writing SSH key tar header
	I0813 21:12:03.150974  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.150871  435593 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 ...
	I0813 21:12:03.151027  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438
	I0813 21:12:03.152513  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 (perms=drwx------)
	I0813 21:12:03.152549  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:12:03.152572  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:03.152589  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:12:03.152607  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:12:03.152619  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:12:03.152635  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:12:03.152669  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home
	I0813 21:12:03.152725  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:12:03.152759  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Skipping /home - not owner
	I0813 21:12:03.152804  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:12:03.152828  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:12:03.152848  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:12:03.152863  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating domain...
	I0813 21:12:03.175697  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:a5:b8:4a in network default
	I0813 21:12:03.176263  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.176297  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring networks are active...
	I0813 21:12:03.178718  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network default is active
	I0813 21:12:03.179049  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network mk-newest-cni-20210813211202-393438 is active
	I0813 21:12:03.179567  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Getting domain xml...
	I0813 21:12:03.181463  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating domain...
	I0813 21:12:03.653418  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Waiting to get IP...
	I0813 21:12:03.654770  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.655326  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.655356  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.655270  435593 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:12:03.919773  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.920320  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.920354  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.920243  435593 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:12:04.302739  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.303182  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.303216  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:04.303132  435593 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:12:04.727655  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.728164  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.728197  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:04.728108  435593 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:12:05.202828  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.203392  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.203422  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:05.203328  435593 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:12:05.791972  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.792487  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.792518  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:05.792429  435593 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:12:06.627792  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:06.628345  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:06.628381  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:06.628280  435593 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:12:07.375959  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:07.376395  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:07.376430  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:07.376325  435593 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:12:03.080800  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:05.081239  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:07.083425  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:08.106986  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:08.107015  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107023  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:08.107029  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107036  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107044  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107056  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:08.107066  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107088  434036 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver
	I0813 21:12:10.608633  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:10.608661  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608669  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608677  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608683  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608689  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608699  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:10.608709  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608730  434036 retry.go:31] will retry after 3.420895489s: missing components: kube-apiserver
	I0813 21:12:08.365541  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:08.365958  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:08.365989  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:08.365915  435593 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:12:09.557147  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:09.557589  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:09.557632  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:09.557523  435593 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:12:11.236771  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:11.237277  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:11.237308  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:11.237206  435593 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:12:09.582507  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:12.083780  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	e6c135b981d86       523cad1a4df73       8 seconds ago       Exited              dashboard-metrics-scraper   1                   bdd7ecd405185
	13422228dfbf2       9a07b5b4bfac0       15 seconds ago      Running             kubernetes-dashboard        0                   4ce0cc7ececda
	9e8933c4e874c       6e38f40d628db       15 seconds ago      Running             storage-provisioner         0                   fc1b3e504b699
	db7a83df618b3       296a6d5035e2d       20 seconds ago      Running             coredns                     0                   d9b8ac5fbc2b5
	bb9a649072a37       adb2816ea823a       22 seconds ago      Running             kube-proxy                  0                   f06a4e1dc0256
	f1064867a5630       6be0dc1302e30       45 seconds ago      Running             kube-scheduler              0                   cebfd1f671bb7
	7426296452443       3d174f00aa39e       45 seconds ago      Running             kube-apiserver              0                   d9bd2c2bce360
	ec76c816427ad       0369cf4303ffd       45 seconds ago      Running             etcd                        0                   57df03b29f44a
	e397d877bd7e0       bc2bb319a7038       46 seconds ago      Running             kube-controller-manager     0                   5b72edbb20bb5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:03 UTC, end at Fri 2021-08-13 21:12:17 UTC. --
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.716480180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.718775772Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.721790397Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.808539946Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.810868440Z" level=info msg="StartContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.225107350Z" level=info msg="StartContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\" returns successfully"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.258914743Z" level=info msg="Finish piping stderr of container \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.259059239Z" level=info msg="Finish piping stdout of container \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.260874665Z" level=info msg="TaskExit event &TaskExit{ContainerID:c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c,ID:c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c,Pid:6967,ExitStatus:1,ExitedAt:2021-08-13 21:12:08.260536292 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.324543251Z" level=info msg="shim disconnected" id=c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.324854883Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.898599333Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.946483274Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.947822775Z" level=info msg="StartContainer for \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.355642674Z" level=info msg="StartContainer for \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\" returns successfully"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.387808603Z" level=info msg="Finish piping stderr of container \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.388374499Z" level=info msg="Finish piping stdout of container \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.390381331Z" level=info msg="TaskExit event &TaskExit{ContainerID:e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0,ID:e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0,Pid:7034,ExitStatus:1,ExitedAt:2021-08-13 21:12:09.389893877 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.454322140Z" level=info msg="shim disconnected" id=e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.454487960Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.917164292Z" level=info msg="RemoveContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.936096229Z" level=info msg="RemoveContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\" returns successfully"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.314634042Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.319194800Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.323879712Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> coredns [db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[Aug13 21:05] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.044495] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.002266] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1717 comm=systemd-network
	[  +0.593568] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.253024] vboxguest: loading out-of-tree module taints kernel.
	[  +0.025899] vboxguest: PCI device not found, probably running on physical hardware.
	[ +21.111329] systemd-fstab-generator[2069]: Ignoring "noauto" for root device
	[  +0.260243] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.170967] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
	[  +0.251334] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	[  +6.496828] systemd-fstab-generator[2336]: Ignoring "noauto" for root device
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[  +3.235987] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.143742] kauditd_printk_skb: 101 callbacks suppressed
	[Aug13 21:11] systemd-fstab-generator[5170]: Ignoring "noauto" for root device
	[ +16.431282] systemd-fstab-generator[5588]: Ignoring "noauto" for root device
	[ +14.178939] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.451249] kauditd_printk_skb: 77 callbacks suppressed
	[Aug13 21:12] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.942031] systemd-fstab-generator[7088]: Ignoring "noauto" for root device
	[  +0.799655] systemd-fstab-generator[7141]: Ignoring "noauto" for root device
	[  +0.986899] systemd-fstab-generator[7195]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22] <==
	* raft2021/08/13 21:11:31 INFO: a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)
	2021-08-13 21:11:31.801445 W | auth: simple token is not cryptographically signed
	2021-08-13 21:11:31.828106 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 21:11:31.849472 I | etcdserver: a8a86752a40bcef4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 21:11:31 INFO: a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)
	2021-08-13 21:11:31.861855 I | etcdserver/membership: added member a8a86752a40bcef4 [https://192.168.39.163:2380] to cluster e373eafcd5903e51
	2021-08-13 21:11:31.886397 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:11:31.886568 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:11:31.886667 I | embed: listening for peers on 192.168.39.163:2380
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 is starting a new election at term 1
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 became candidate at term 2
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 received MsgVoteResp from a8a86752a40bcef4 at term 2
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 became leader at term 2
	raft2021/08/13 21:11:32 INFO: raft.node: a8a86752a40bcef4 elected leader a8a86752a40bcef4 at term 2
	2021-08-13 21:11:32.295394 I | etcdserver: published {Name:default-k8s-different-port-20210813210121-393438 ClientURLs:[https://192.168.39.163:2379]} to cluster e373eafcd5903e51
	2021-08-13 21:11:32.298167 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:11:32.298474 I | embed: ready to serve client requests
	2021-08-13 21:11:32.302408 I | embed: ready to serve client requests
	2021-08-13 21:11:32.308325 I | embed: serving client requests on 192.168.39.163:2379
	2021-08-13 21:11:32.308807 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:11:32.325368 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:11:32.363691 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:11:55.194613 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:12:01.372556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:12:11.369964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  21:12:27 up 7 min,  0 users,  load average: 1.99, 0.95, 0.42
	Linux default-k8s-different-port-20210813210121-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8] <==
	* I0813 21:11:36.963490       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:11:36.968089       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:11:37.741856       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:11:37.741968       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:11:37.761147       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:11:37.769570       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:11:37.769972       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:11:38.492178       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:11:38.569200       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:11:38.694389       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.163]
	I0813 21:11:38.695926       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:11:38.705719       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:11:39.373709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:11:40.588382       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:11:40.678180       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:11:46.165453       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:11:53.802032       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:11:53.901687       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:12:01.126984       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:12:01.127288       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:12:01.127306       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:12:15.140803       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:12:15.141000       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:12:15.141140       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702] <==
	* I0813 21:11:58.192881       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-qq4n6"
	I0813 21:11:58.918615       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:11:58.942677       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 21:11:58.978107       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:58.978815       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.028598       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.044163       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.079167       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.080031       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.096027       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.096043       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.110042       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.110544       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.118079       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.118580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.137105       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.137996       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.140839       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.140148       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.162676       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.162856       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.162883       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.162896       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.205811       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-mk55h"
	I0813 21:11:59.212679       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-6tdsg"
	
	* 
	* ==> kube-proxy [bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47] <==
	* I0813 21:11:56.070946       1 node.go:172] Successfully retrieved node IP: 192.168.39.163
	I0813 21:11:56.071157       1 server_others.go:140] Detected node IP 192.168.39.163
	W0813 21:11:56.071597       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:11:56.338613       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:11:56.338655       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:11:56.338673       1 server_others.go:212] Using iptables Proxier.
	I0813 21:11:56.341755       1 server.go:643] Version: v1.21.3
	I0813 21:11:56.349955       1 config.go:315] Starting service config controller
	I0813 21:11:56.350464       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:11:56.350752       1 config.go:224] Starting endpoint slice config controller
	I0813 21:11:56.350764       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:11:56.393085       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:11:56.405925       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:11:56.453407       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:11:56.454916       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054] <==
	* E0813 21:11:36.910519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:36.911024       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:36.911407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:36.911982       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:36.912548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.913029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.913622       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:36.914028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:36.914350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:36.914853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:36.915294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.915652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:36.919151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:37.732850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:37.733622       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:37.840822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:11:37.880478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:37.921953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:37.981661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:38.050116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:38.064172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:38.124858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:38.179975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:38.215469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 21:11:40.105771       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:03 UTC, end at Fri 2021-08-13 21:12:27 UTC. --
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.426449    5597 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h6b4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qq4n6_kube-system(c5878f91-7def-4945-96e9-d0ffc69ebaa4): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.426528    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.530092    5597 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l9rz\" (UniqueName: \"kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz\") pod \"c34d77cb-c710-4c58-bda3-046fff1434c4\" (UID: \"c34d77cb-c710-4c58-bda3-046fff1434c4\") "
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.530162    5597 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume\") pod \"c34d77cb-c710-4c58-bda3-046fff1434c4\" (UID: \"c34d77cb-c710-4c58-bda3-046fff1434c4\") "
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: W0813 21:12:00.532994    5597 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/c34d77cb-c710-4c58-bda3-046fff1434c4/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.535689    5597 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "c34d77cb-c710-4c58-bda3-046fff1434c4" (UID: "c34d77cb-c710-4c58-bda3-046fff1434c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.558817    5597 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz" (OuterVolumeSpecName: "kube-api-access-6l9rz") pod "c34d77cb-c710-4c58-bda3-046fff1434c4" (UID: "c34d77cb-c710-4c58-bda3-046fff1434c4"). InnerVolumeSpecName "kube-api-access-6l9rz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.631004    5597 reconciler.go:319] "Volume detached for volume \"kube-api-access-6l9rz\" (UniqueName: \"kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz\") on node \"default-k8s-different-port-20210813210121-393438\" DevicePath \"\""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.631101    5597 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume\") on node \"default-k8s-different-port-20210813210121-393438\" DevicePath \"\""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.726859    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:07.256518    5597 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod8c6aee04-a20c-445c-835a-5dc57e81b7f5\": RecentStats: unable to find data in memory cache], [\"/kubepods/burstable/podc5878f91-7def-4945-96e9-d0ffc69ebaa4\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:08.891908    5597 scope.go:111] "RemoveContainer" containerID="c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:09.903006    5597 scope.go:111] "RemoveContainer" containerID="c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:09.909072    5597 scope.go:111] "RemoveContainer" containerID="e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:09.913551    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-mk55h_kubernetes-dashboard(c4b71b47-1c44-4b09-b5ec-4a9708e68adb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-mk55h" podUID=c4b71b47-1c44-4b09-b5ec-4a9708e68adb
	Aug 13 21:12:10 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:10.910040    5597 scope.go:111] "RemoveContainer" containerID="e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	Aug 13 21:12:10 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:10.911370    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-mk55h_kubernetes-dashboard(c4b71b47-1c44-4b09-b5ec-4a9708e68adb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-mk55h" podUID=c4b71b47-1c44-4b09-b5ec-4a9708e68adb
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.324586    5597 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.324809    5597 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.325202    5597 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h6b4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qq4n6_kube-system(c5878f91-7def-4945-96e9-d0ffc69ebaa4): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.325571    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:14.210143    5597 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55] <==
	* 2021/08/13 21:12:02 Using namespace: kubernetes-dashboard
	2021/08/13 21:12:02 Using in-cluster config to connect to apiserver
	2021/08/13 21:12:02 Using secret token for csrf signing
	2021/08/13 21:12:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:12:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:12:02 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:12:02 Generating JWE encryption key
	2021/08/13 21:12:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:12:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:12:02 Initializing JWE encryption key from synchronized object
	2021/08/13 21:12:02 Creating in-cluster Sidecar client
	2021/08/13 21:12:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:12:02 Serving insecurely on HTTP port: 9090
	2021/08/13 21:12:02 Starting overwatch
	
	* 
	* ==> storage-provisioner [9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e] <==
	* I0813 21:12:01.638722       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:12:01.686348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:12:01.686672       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:12:01.705039       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:12:01.706137       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03!
	I0813 21:12:01.711209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92d79f95-b534-4d8c-a17a-ccb7bcffd25b", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03 became leader
	I0813 21:12:01.806399       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03!
	
	

-- /stdout --
** stderr ** 
	E0813 21:12:27.229966  435942 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
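The ErrImagePull above is expected rather than a regression: the Audit table later in these logs shows metrics-server was deliberately re-pointed at the unreachable registry fake.domain (addons enable metrics-server --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain), so the pull has to fail. The exit status 110, by contrast, traces to the describe-nodes probe quoted in the stderr block: with the cluster paused for this test, the apiserver does not complete the TLS handshake, the probe exits non-zero, and minikube logs reports the source it could not fetch. A minimal sketch of that probe in Go, assuming only the kubectl binary and kubeconfig paths quoted above:

// A minimal sketch, assuming the kubectl binary and kubeconfig sit at the
// paths quoted in the stderr block above, of the probe whose failure produced
// "unable to fetch logs for: describe nodes".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.21.3/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Against a paused cluster this surfaces the same diagnostic as above:
		// "Unable to connect to the server: net/http: TLS handshake timeout"
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}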
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438: exit status 2 (300.914389ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
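Note that exit status 2 from minikube status is not itself the failure being diagnosed: the command appears to fold component health into its exit code, so a host that prints Running can still yield a non-zero exit when other components (here, the paused kubelet and apiserver) are down, which is why the harness annotates it "(may be ok)". A sketch, under that assumption, of inspecting the exit code from Go instead of treating any non-zero exit as fatal:

// A sketch, assuming the profile name and binary path shown above, of reading
// minikube status's exit code rather than treating non-zero exit as fatal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}",
		"-p", "default-k8s-different-port-20210813210121-393438")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. exit status 2 with stdout "Running": the host is up but some
		// component is not, which the harness above treats as "may be ok".
		fmt.Printf("status exited %d: %s\n", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host: %s\n", out)
}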
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210813210121-393438 logs -n 25

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210813210121-393438 logs -n 25: exit status 110 (12.380891828s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 20:59:53 UTC | Fri, 13 Aug 2021 21:02:44 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:53 UTC | Fri, 13 Aug 2021 21:02:54 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:21 UTC | Fri, 13 Aug 2021 21:03:08 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:03:16 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:12:02
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:12:02.440919  435569 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:02.441013  435569 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:02.441018  435569 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:02.441023  435569 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:02.441169  435569 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:02.441563  435569 out.go:305] Setting JSON to false
	I0813 21:12:02.480588  435569 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6885,"bootTime":1628882238,"procs":200,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:12:02.480745  435569 start.go:121] virtualization: kvm guest
	I0813 21:12:02.482750  435569 out.go:177] * [newest-cni-20210813211202-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:12:02.484177  435569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:02.482925  435569 notify.go:169] Checking for updates...
	I0813 21:12:02.485531  435569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:12:02.486862  435569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:02.488153  435569 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:12:02.488757  435569 config.go:177] Loaded profile config "default-k8s-different-port-20210813210121-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:02.488905  435569 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:02.489031  435569 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:12:02.489086  435569 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:12:02.521538  435569 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:12:02.521562  435569 start.go:278] selected driver: kvm2
	I0813 21:12:02.521567  435569 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:12:02.521584  435569 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:12:02.523028  435569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:02.523250  435569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:12:02.537216  435569 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:12:02.537273  435569 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0813 21:12:02.537294  435569 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 21:12:02.537465  435569 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:12:02.537497  435569 cni.go:93] Creating CNI manager for ""
	I0813 21:12:02.537504  435569 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:02.537513  435569 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:12:02.537526  435569 start_flags.go:277] config:
	{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:02.537684  435569 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:02.539702  435569 out.go:177] * Starting control plane node newest-cni-20210813211202-393438 in cluster newest-cni-20210813211202-393438
	I0813 21:12:02.539729  435569 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:12:02.539787  435569 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 21:12:02.539837  435569 cache.go:56] Caching tarball of preloaded images
	I0813 21:12:02.540011  435569 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:12:02.540046  435569 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 21:12:02.540155  435569 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json ...
	I0813 21:12:02.540180  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json: {Name:mk93f53330c0201aab4d93f29b753ec30fd29552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:02.540343  435569 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:12:02.540371  435569 start.go:313] acquiring machines lock for newest-cni-20210813211202-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:12:02.540424  435569 start.go:317] acquired machines lock for "newest-cni-20210813211202-393438" in 36.833µs
	I0813 21:12:02.540453  435569 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:12:02.540543  435569 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:11:58.581342  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:00.588539  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:02.171246  434036 system_pods.go:86] 6 kube-system pods found
	I0813 21:12:02.171274  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171282  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171289  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171298  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171310  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:02.171320  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:02.171342  434036 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver
	I0813 21:12:03.521720  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:03.521755  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521763  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:03.521769  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521775  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521782  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521794  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:03.521825  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:03.521847  434036 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver
	I0813 21:12:05.405867  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:05.405905  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405914  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:05.405921  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405928  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405934  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405947  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:05.405956  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:05.405977  434036 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver
	I0813 21:12:02.542365  435569 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 21:12:02.542518  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:02.542566  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:02.553296  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0813 21:12:02.553750  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:02.554251  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:12:02.554276  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:02.554656  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:02.554891  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetMachineName
	I0813 21:12:02.555033  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:02.555176  435569 start.go:160] libmachine.API.Create for "newest-cni-20210813211202-393438" (driver="kvm2")
	I0813 21:12:02.555208  435569 client.go:168] LocalClient.Create starting
	I0813 21:12:02.555246  435569 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:12:02.555274  435569 main.go:130] libmachine: Decoding PEM data...
	I0813 21:12:02.555297  435569 main.go:130] libmachine: Parsing certificate...
	I0813 21:12:02.555426  435569 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:12:02.555452  435569 main.go:130] libmachine: Decoding PEM data...
	I0813 21:12:02.555464  435569 main.go:130] libmachine: Parsing certificate...
	I0813 21:12:02.555503  435569 main.go:130] libmachine: Running pre-create checks...
	I0813 21:12:02.555514  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .PreCreateCheck
	I0813 21:12:02.555872  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetConfigRaw
	I0813 21:12:02.556302  435569 main.go:130] libmachine: Creating machine...
	I0813 21:12:02.556322  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Create
	I0813 21:12:02.556447  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating KVM machine...
	I0813 21:12:02.559307  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found existing default KVM network
	I0813 21:12:02.561095  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.560929  435593 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:fe:39}}
	I0813 21:12:02.561913  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.561849  435593 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 21:12:02.563382  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.563270  435593 network.go:288] reserving subnet 192.168.61.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.61.0:0xc0000a85d8] misses:0}
	I0813 21:12:02.563448  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.563309  435593 network.go:235] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:12:02.589019  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | trying to create private KVM network mk-newest-cni-20210813211202-393438 192.168.61.0/24...
	I0813 21:12:02.847187  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | private KVM network mk-newest-cni-20210813211202-393438 192.168.61.0/24 created
	I0813 21:12:02.847253  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:02.847124  435593 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:02.847286  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 ...
	I0813 21:12:02.847333  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:12:02.847369  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:12:03.066852  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.066718  435593 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa...
	I0813 21:12:03.150883  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.150762  435593 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/newest-cni-20210813211202-393438.rawdisk...
	I0813 21:12:03.150920  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Writing magic tar header
	I0813 21:12:03.150940  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Writing SSH key tar header
	I0813 21:12:03.150974  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.150871  435593 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 ...
	I0813 21:12:03.151027  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438
	I0813 21:12:03.152513  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438 (perms=drwx------)
	I0813 21:12:03.152549  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:12:03.152572  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:03.152589  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:12:03.152607  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:12:03.152619  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:12:03.152635  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:12:03.152669  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Checking permissions on dir: /home
	I0813 21:12:03.152725  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:12:03.152759  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Skipping /home - not owner
	I0813 21:12:03.152804  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:12:03.152828  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:12:03.152848  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:12:03.152863  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating domain...
	I0813 21:12:03.175697  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:a5:b8:4a in network default
	I0813 21:12:03.176263  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.176297  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring networks are active...
	I0813 21:12:03.178718  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network default is active
	I0813 21:12:03.179049  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network mk-newest-cni-20210813211202-393438 is active
	I0813 21:12:03.179567  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Getting domain xml...
	I0813 21:12:03.181463  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating domain...
	I0813 21:12:03.653418  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Waiting to get IP...
	I0813 21:12:03.654770  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.655326  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.655356  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.655270  435593 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:12:03.919773  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.920320  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:03.920354  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:03.920243  435593 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:12:04.302739  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.303182  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.303216  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:04.303132  435593 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:12:04.727655  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.728164  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:04.728197  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:04.728108  435593 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:12:05.202828  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.203392  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.203422  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:05.203328  435593 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:12:05.791972  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.792487  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:05.792518  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:05.792429  435593 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:12:06.627792  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:06.628345  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:06.628381  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:06.628280  435593 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:12:07.375959  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:07.376395  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:07.376430  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:07.376325  435593 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
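
The retry.go:31 lines above show libmachine polling the KVM network for the domain's DHCP lease, sleeping a growing delay between attempts. A minimal Go sketch of that wait-with-backoff shape (function and variable names are hypothetical, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping a randomized, growing interval between tries -- the shape of the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	base := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Grow the delay with the attempt count and add jitter so
		// concurrent waiters do not poll in lockstep.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.119", nil
	}, 10)
	fmt.Println(ip, err)
}
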
	I0813 21:12:03.080800  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:05.081239  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:07.083425  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:08.106986  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:08.107015  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107023  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:08.107029  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107036  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107044  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107056  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:08.107066  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:08.107088  434036 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver
	I0813 21:12:10.608633  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:10.608661  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608669  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608677  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608683  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608689  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608699  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:10.608709  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:10.608730  434036 retry.go:31] will retry after 3.420895489s: missing components: kube-apiserver
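
The system_pods.go block above keeps retrying until every required control-plane component has a Running pod, reporting whatever is still missing. A stdlib-only sketch of that bookkeeping (the real check goes through the Kubernetes API; pod names and phases here are taken from the log):

package main

import (
	"fmt"
	"strings"
)

// missingComponents returns the required components with no Running pod,
// mirroring the "missing components: etcd, kube-apiserver" retry messages.
func missingComponents(pods map[string]string, required []string) []string {
	var missing []string
	for _, comp := range required {
		found := false
		for name, phase := range pods {
			// Control-plane pods are named "<component>-<node>", e.g.
			// "etcd-old-k8s-version-...", so a prefix match suffices here.
			if strings.HasPrefix(name, comp) && phase == "Running" {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, comp)
		}
	}
	return missing
}

func main() {
	pods := map[string]string{
		"coredns-fb8b8dccf-vlm5d":                     "Running",
		"etcd-old-k8s-version-20210813205952-393438":  "Pending",
		"kube-proxy-zqww7":                            "Running",
	}
	required := []string{"etcd", "kube-apiserver", "kube-proxy"}
	fmt.Println("missing components:", strings.Join(missingComponents(pods, required), ", "))
}
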
	I0813 21:12:08.365541  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:08.365958  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:08.365989  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:08.365915  435593 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:12:09.557147  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:09.557589  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:09.557632  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:09.557523  435593 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:12:11.236771  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:11.237277  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:11.237308  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:11.237206  435593 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:12:09.582507  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:12.083780  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:14.037319  434036 system_pods.go:86] 7 kube-system pods found
	I0813 21:12:14.037360  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037370  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037399  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037408  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037426  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037441  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:14.037451  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:14.037477  434036 retry.go:31] will retry after 4.133785681s: missing components: kube-apiserver
	I0813 21:12:13.584611  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:13.585577  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:13.585609  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:13.585520  435593 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:12:16.954772  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:16.955272  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find current IP address of domain newest-cni-20210813211202-393438 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:16.955315  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | I0813 21:12:16.955206  435593 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 21:12:14.580811  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:16.585503  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:18.176738  434036 system_pods.go:86] 8 kube-system pods found
	I0813 21:12:18.176772  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176780  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176790  434036 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205952-393438" [22920482-fc7b-11eb-a3a8-525400553b5e] Pending
	I0813 21:12:18.176797  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176804  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176810  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176822  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:18.176829  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:18.176852  434036 retry.go:31] will retry after 5.595921491s: missing components: kube-apiserver
	I0813 21:12:20.075328  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.075791  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has current primary IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.075813  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Found IP for machine: 192.168.61.119
	I0813 21:12:20.075825  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Reserving static IP address...
	I0813 21:12:20.076105  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210813211202-393438", mac: "52:54:00:cc:cf:c7", ip: "192.168.61.119"} in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.125588  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Getting to WaitForSSH function...
	I0813 21:12:20.125623  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Reserved static IP address: 192.168.61.119
	I0813 21:12:20.125669  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Waiting for SSH to be available...
	I0813 21:12:20.130696  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.131130  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.131161  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.131287  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Using SSH client type: external
	I0813 21:12:20.131317  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa (-rw-------)
	I0813 21:12:20.131353  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:12:20.131369  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | About to run SSH command:
	I0813 21:12:20.131393  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | exit 0
	I0813 21:12:20.262037  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:12:20.262563  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) KVM machine creation complete!
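
"Waiting for SSH to be available..." resolves once the guest answers on port 22 and a trivial `exit 0` succeeds over an authenticated session. A sketch of the cheap first half of that probe, assuming plain TCP reachability is enough for illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the SSH port until a TCP connection succeeds; the real
// check in the log then runs `exit 0` over an authenticated SSH session.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.61.119:22", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Waiting for SSH to be available... done")
}
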
	I0813 21:12:20.262615  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetConfigRaw
	I0813 21:12:20.263261  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:20.263466  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:20.263609  435569 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:12:20.263623  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:12:20.266229  435569 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:12:20.266253  435569 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:12:20.266259  435569 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:12:20.266266  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.270629  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.270976  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.271008  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.271149  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:20.271321  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.271557  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.271686  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:20.271848  435569 main.go:130] libmachine: Using SSH client type: native
	I0813 21:12:20.272067  435569 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I0813 21:12:20.272081  435569 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:12:20.394964  435569 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:12:20.394992  435569 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:12:20.395004  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.400454  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.400801  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.400833  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.400989  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:20.401169  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.401336  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.401506  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:20.401664  435569 main.go:130] libmachine: Using SSH client type: native
	I0813 21:12:20.401823  435569 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I0813 21:12:20.401839  435569 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:12:20.523219  435569 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:12:20.523287  435569 main.go:130] libmachine: found compatible host: buildroot
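
The provisioner is detected by running `cat /etc/os-release` over SSH and matching the ID field ("buildroot" above). A minimal parser for that KEY=VALUE format, per os-release(5); the sample content is copied from the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
// stripping optional quotes, e.g. ID=buildroot or PRETTY_NAME="Buildroot ...".
func parseOSRelease(content string) map[string]string {
	out := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(val, `"`)
	}
	return out
}

func main() {
	const sample = `NAME=Buildroot
VERSION=2020.02.12
ID=buildroot
PRETTY_NAME="Buildroot 2020.02.12"`
	info := parseOSRelease(sample)
	fmt.Println("found compatible host:", info["ID"])
}
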
	I0813 21:12:20.523306  435569 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:12:20.523317  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetMachineName
	I0813 21:12:20.523602  435569 buildroot.go:166] provisioning hostname "newest-cni-20210813211202-393438"
	I0813 21:12:20.523626  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetMachineName
	I0813 21:12:20.523839  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.528756  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.529088  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.529116  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.529249  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:20.529408  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.529530  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.529701  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:20.529877  435569 main.go:130] libmachine: Using SSH client type: native
	I0813 21:12:20.530053  435569 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I0813 21:12:20.530070  435569 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210813211202-393438 && echo "newest-cni-20210813211202-393438" | sudo tee /etc/hostname
	I0813 21:12:20.658694  435569 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210813211202-393438
	
	I0813 21:12:20.658723  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.663833  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.664170  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.664208  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.664366  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:20.664521  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.664678  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.664795  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:20.664961  435569 main.go:130] libmachine: Using SSH client type: native
	I0813 21:12:20.665135  435569 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I0813 21:12:20.665165  435569 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210813211202-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210813211202-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210813211202-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:12:20.788855  435569 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:12:20.788885  435569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:12:20.788941  435569 buildroot.go:174] setting up certificates
	I0813 21:12:20.788957  435569 provision.go:83] configureAuth start
	I0813 21:12:20.788973  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetMachineName
	I0813 21:12:20.789210  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetIP
	I0813 21:12:20.794122  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.794498  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.794531  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.794605  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.798835  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.799147  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.799179  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.799277  435569 provision.go:138] copyHostCerts
	I0813 21:12:20.799351  435569 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:12:20.799364  435569 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:12:20.799430  435569 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:12:20.799517  435569 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:12:20.799528  435569 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:12:20.799559  435569 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:12:20.799633  435569 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:12:20.799643  435569 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:12:20.799675  435569 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
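
copyHostCerts above is deliberately remove-then-copy, so a stale certificate from an earlier run never survives a refresh. A stdlib sketch of one such refresh (paths shortened for illustration; not minikube's exec_runner itself):

package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

// refreshCert removes any existing copy of dst, then copies src into place
// with owner-only permissions, echoing the found/rm/cp sequence in the log.
func refreshCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("found %s, removing ...\n", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	if err := refreshCert("certs/ca.pem", "ca.pem"); err != nil {
		log.Fatal(err)
	}
}
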
	I0813 21:12:20.799724  435569 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210813211202-393438 san=[192.168.61.119 192.168.61.119 localhost 127.0.0.1 minikube newest-cni-20210813211202-393438]
	I0813 21:12:20.970082  435569 provision.go:172] copyRemoteCerts
	I0813 21:12:20.970136  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:12:20.970160  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:20.975007  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.975367  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:20.975395  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:20.975529  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:20.975686  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:20.975824  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:20.975934  435569 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:12:21.061503  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:12:21.078589  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 21:12:21.096554  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:12:21.113831  435569 provision.go:86] duration metric: configureAuth took 324.861657ms
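
The "duration metric: configureAuth took ..." line is the standard time.Since pattern wrapped around a provisioning step; a sketch, with configureAuth as a stand-in for the certificate work above:

package main

import (
	"fmt"
	"time"
)

// configureAuth is a placeholder for the real certificate setup step.
func configureAuth() { time.Sleep(300 * time.Millisecond) }

func main() {
	start := time.Now()
	configureAuth()
	fmt.Printf("duration metric: configureAuth took %v\n", time.Since(start))
}
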
	I0813 21:12:21.113858  435569 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:12:21.114067  435569 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:21.114097  435569 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:12:21.114120  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetURL
	I0813 21:12:21.116783  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Using libvirt version 3000000
	I0813 21:12:21.121634  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.122007  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.122039  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.122104  435569 main.go:130] libmachine: Docker is up and running!
	I0813 21:12:21.122119  435569 main.go:130] libmachine: Reticulating splines...
	I0813 21:12:21.122132  435569 client.go:171] LocalClient.Create took 18.566912976s
	I0813 21:12:21.122154  435569 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210813211202-393438" took 18.566978181s
	I0813 21:12:21.122169  435569 start.go:267] post-start starting for "newest-cni-20210813211202-393438" (driver="kvm2")
	I0813 21:12:21.122177  435569 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:12:21.122206  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:21.122408  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:12:21.122438  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:21.126875  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.127119  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.127147  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.127277  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:21.127450  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:21.127585  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:21.127719  435569 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:12:21.213628  435569 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:12:21.218248  435569 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:12:21.218313  435569 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:12:21.218385  435569 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:12:21.218490  435569 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:12:21.218597  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:12:21.225131  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:12:21.241899  435569 start.go:270] post-start completed in 119.718112ms
	I0813 21:12:21.241963  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetConfigRaw
	I0813 21:12:21.242476  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetIP
	I0813 21:12:21.248001  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.248352  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.248392  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.248647  435569 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json ...
	I0813 21:12:21.248803  435569 start.go:129] duration metric: createHost completed in 18.708251047s
	I0813 21:12:21.248816  435569 start.go:80] releasing machines lock for "newest-cni-20210813211202-393438", held for 18.708379739s
	I0813 21:12:21.248862  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:21.249020  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetIP
	I0813 21:12:21.253354  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.253664  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.253697  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.253785  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:21.253953  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:21.254412  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:12:21.254583  435569 ssh_runner.go:149] Run: systemctl --version
	I0813 21:12:21.254605  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:21.254694  435569 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:12:21.254749  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:12:21.259646  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.259959  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.259989  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.260218  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:21.260394  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:21.260543  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:21.260679  435569 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:12:21.260864  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.261145  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:12:21.261182  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:12:21.261369  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:12:21.261544  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:12:21.261735  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:12:21.261888  435569 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:12:21.345053  435569 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:12:21.345167  435569 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:19.081966  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:21.579908  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:23.778836  434036 system_pods.go:86] 8 kube-system pods found
	I0813 21:12:23.778863  434036 system_pods.go:89] "coredns-fb8b8dccf-vlm5d" [fea5b365-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778870  434036 system_pods.go:89] "etcd-old-k8s-version-20210813205952-393438" [1ad2fd16-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778874  434036 system_pods.go:89] "kube-apiserver-old-k8s-version-20210813205952-393438" [22920482-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778879  434036 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210813205952-393438" [160f7f1f-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778883  434036 system_pods.go:89] "kube-proxy-zqww7" [fe91b2c2-fc7a-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778887  434036 system_pods.go:89] "kube-scheduler-old-k8s-version-20210813205952-393438" [16a63c68-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778895  434036 system_pods.go:89] "metrics-server-8546d8b77b-xv8fc" [0111f547-fc7b-11eb-a3a8-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:12:23.778905  434036 system_pods.go:89] "storage-provisioner" [008cc472-fc7b-11eb-a3a8-525400553b5e] Running
	I0813 21:12:23.778912  434036 system_pods.go:126] duration metric: took 26.190678805s to wait for k8s-apps to be running ...
	I0813 21:12:23.778922  434036 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:12:23.778967  434036 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:23.790399  434036 system_svc.go:56] duration metric: took 11.46131ms WaitForService to wait for kubelet.
	I0813 21:12:23.790429  434036 kubeadm.go:547] duration metric: took 1m7.317361218s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:12:23.790459  434036 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:12:23.794772  434036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:12:23.794799  434036 node_conditions.go:123] node cpu capacity is 2
	I0813 21:12:23.794815  434036 node_conditions.go:105] duration metric: took 4.350564ms to run NodePressure ...
	I0813 21:12:23.794829  434036 start.go:231] waiting for startup goroutines ...
	I0813 21:12:23.840737  434036 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0813 21:12:23.842756  434036 out.go:177] 
	W0813 21:12:23.842917  434036 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0813 21:12:23.844481  434036 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:12:23.845911  434036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210813205952-393438" cluster and "default" namespace by default
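
The "(minor skew: 6)" figure above compares the minor version of the installed kubectl against the cluster's Kubernetes version. A sketch of that comparison (parsing simplified; assumes well-formed MAJOR.MINOR.PATCH strings):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// a kubectl build and a cluster -- the number behind "(minor skew: 6)".
func minorSkew(kubectl, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	return skew
}

func main() {
	fmt.Printf("kubectl: 1.20.5, cluster: 1.14.0 (minor skew: %d)\n", minorSkew("1.20.5", "1.14.0"))
}
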
	I0813 21:12:25.347373  435569 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.002178644s)
	I0813 21:12:25.347499  435569 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0813 21:12:25.347580  435569 ssh_runner.go:149] Run: which lz4
	I0813 21:12:25.352331  435569 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:12:25.356814  435569 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:12:25.356848  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
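
The stat call above is an existence probe: a non-zero exit status ("No such file or directory") tells the runner to scp the preload tarball into place rather than reuse one. A sketch of that decision, using a local stat invocation to stand in for the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// preloadExists mirrors the log's existence check: run stat on the path
// and treat a non-zero exit status as "not there, copy it over".
func preloadExists(path string) bool {
	// In minikube this runs over SSH; locally, exec.Command stands in.
	err := exec.Command("stat", "-c", "%s %y", path).Run()
	return err == nil
}

func main() {
	if !preloadExists("/preloaded.tar.lz4") {
		fmt.Println("existence check failed: would scp preloaded-images tarball --> /preloaded.tar.lz4")
		return
	}
	fmt.Println("preload already present, skipping copy")
}
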
	I0813 21:12:23.579972  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:25.582619  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:27.584933  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	e6c135b981d86       523cad1a4df73       19 seconds ago      Exited              dashboard-metrics-scraper   1                   bdd7ecd405185
	13422228dfbf2       9a07b5b4bfac0       26 seconds ago      Running             kubernetes-dashboard        0                   4ce0cc7ececda
	9e8933c4e874c       6e38f40d628db       27 seconds ago      Running             storage-provisioner         0                   fc1b3e504b699
	db7a83df618b3       296a6d5035e2d       31 seconds ago      Running             coredns                     0                   d9b8ac5fbc2b5
	bb9a649072a37       adb2816ea823a       34 seconds ago      Running             kube-proxy                  0                   f06a4e1dc0256
	f1064867a5630       6be0dc1302e30       57 seconds ago      Running             kube-scheduler              0                   cebfd1f671bb7
	7426296452443       3d174f00aa39e       57 seconds ago      Running             kube-apiserver              0                   d9bd2c2bce360
	ec76c816427ad       0369cf4303ffd       57 seconds ago      Running             etcd                        0                   57df03b29f44a
	e397d877bd7e0       bc2bb319a7038       57 seconds ago      Running             kube-controller-manager     0                   5b72edbb20bb5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:03 UTC, end at Fri 2021-08-13 21:12:28 UTC. --
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.716480180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.718775772Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.721790397Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.808539946Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:07.810868440Z" level=info msg="StartContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.225107350Z" level=info msg="StartContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\" returns successfully"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.258914743Z" level=info msg="Finish piping stderr of container \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.259059239Z" level=info msg="Finish piping stdout of container \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.260874665Z" level=info msg="TaskExit event &TaskExit{ContainerID:c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c,ID:c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c,Pid:6967,ExitStatus:1,ExitedAt:2021-08-13 21:12:08.260536292 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.324543251Z" level=info msg="shim disconnected" id=c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.324854883Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.898599333Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.946483274Z" level=info msg="CreateContainer within sandbox \"bdd7ecd405185b04473fbc3fd25e12780d841b5ee9e0dc78fa47325e6711f6b7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:08.947822775Z" level=info msg="StartContainer for \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.355642674Z" level=info msg="StartContainer for \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\" returns successfully"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.387808603Z" level=info msg="Finish piping stderr of container \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.388374499Z" level=info msg="Finish piping stdout of container \"e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.390381331Z" level=info msg="TaskExit event &TaskExit{ContainerID:e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0,ID:e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0,Pid:7034,ExitStatus:1,ExitedAt:2021-08-13 21:12:09.389893877 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.454322140Z" level=info msg="shim disconnected" id=e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.454487960Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.917164292Z" level=info msg="RemoveContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\""
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:09.936096229Z" level=info msg="RemoveContainer for \"c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c\" returns successfully"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.314634042Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.319194800Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 containerd[2156]: time="2021-08-13T21:12:11.323879712Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
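The three containerd entries above are one failure, not three: the PullImage of fake.domain/k8s.gcr.io/echoserver:1.4 dies at name resolution, because fake.domain has no DNS record on the resolver the VM uses (192.168.122.1:53, libvirt's dnsmasq). The unresolvable registry host appears to be deliberate on the test's part, to exercise the image-pull error path. A minimal Go sketch reproducing the same lookup failure, assuming a resolver that likewise has no record for fake.domain:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Mirrors containerd's error above: the Head request to
		// https://fake.domain/v2/... never happens because the host
		// name itself does not resolve.
		addrs, err := net.LookupHost("fake.domain")
		fmt.Println(addrs, err) // expect: [] lookup fake.domain: no such host
	}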
	
	* 
	* ==> coredns [db7a83df618b3ea293c6b7bf50ecbd657ba0dffa31d3b88a503cb689771f0fd5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[Aug13 21:05] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.044495] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.002266] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1717 comm=systemd-network
	[  +0.593568] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.253024] vboxguest: loading out-of-tree module taints kernel.
	[  +0.025899] vboxguest: PCI device not found, probably running on physical hardware.
	[ +21.111329] systemd-fstab-generator[2069]: Ignoring "noauto" for root device
	[  +0.260243] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.170967] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
	[  +0.251334] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	[  +6.496828] systemd-fstab-generator[2336]: Ignoring "noauto" for root device
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[  +3.235987] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.143742] kauditd_printk_skb: 101 callbacks suppressed
	[Aug13 21:11] systemd-fstab-generator[5170]: Ignoring "noauto" for root device
	[ +16.431282] systemd-fstab-generator[5588]: Ignoring "noauto" for root device
	[ +14.178939] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.451249] kauditd_printk_skb: 77 callbacks suppressed
	[Aug13 21:12] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.942031] systemd-fstab-generator[7088]: Ignoring "noauto" for root device
	[  +0.799655] systemd-fstab-generator[7141]: Ignoring "noauto" for root device
	[  +0.986899] systemd-fstab-generator[7195]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ec76c816427ad33994b2055617e82d025e6d6ca3d54b5738666193855befdd22] <==
	* raft2021/08/13 21:11:31 INFO: a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)
	2021-08-13 21:11:31.801445 W | auth: simple token is not cryptographically signed
	2021-08-13 21:11:31.828106 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 21:11:31.849472 I | etcdserver: a8a86752a40bcef4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 21:11:31 INFO: a8a86752a40bcef4 switched to configuration voters=(12153077199096499956)
	2021-08-13 21:11:31.861855 I | etcdserver/membership: added member a8a86752a40bcef4 [https://192.168.39.163:2380] to cluster e373eafcd5903e51
	2021-08-13 21:11:31.886397 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 21:11:31.886568 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 21:11:31.886667 I | embed: listening for peers on 192.168.39.163:2380
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 is starting a new election at term 1
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 became candidate at term 2
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 received MsgVoteResp from a8a86752a40bcef4 at term 2
	raft2021/08/13 21:11:32 INFO: a8a86752a40bcef4 became leader at term 2
	raft2021/08/13 21:11:32 INFO: raft.node: a8a86752a40bcef4 elected leader a8a86752a40bcef4 at term 2
	2021-08-13 21:11:32.295394 I | etcdserver: published {Name:default-k8s-different-port-20210813210121-393438 ClientURLs:[https://192.168.39.163:2379]} to cluster e373eafcd5903e51
	2021-08-13 21:11:32.298167 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 21:11:32.298474 I | embed: ready to serve client requests
	2021-08-13 21:11:32.302408 I | embed: ready to serve client requests
	2021-08-13 21:11:32.308325 I | embed: serving client requests on 192.168.39.163:2379
	2021-08-13 21:11:32.308807 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 21:11:32.325368 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 21:11:32.363691 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 21:11:55.194613 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:12:01.372556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:12:11.369964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
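The trailing /health entries are etcd's health endpoint answering periodic probes. The log above also announces a plaintext metrics listener on 127.0.0.1:2381, and etcd serves /health on that listener as well, so the probe can be repeated without the client TLS material under /var/lib/minikube/certs. A minimal sketch, assuming that listener is reachable from the node:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The TLS client port (2379) requires certificates; the metrics
		// listener on 2381 also serves /health in plaintext.
		resp, err := http.Get("http://127.0.0.1:2381/health")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body)) // expect: 200 OK {"health":"true"}
	}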
	
	* 
	* ==> kernel <==
	*  21:12:40 up 7 min,  0 users,  load average: 1.55, 0.90, 0.42
	Linux default-k8s-different-port-20210813210121-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [74262964524438b412a2f36a916beb788ae233f9ca50ee926f88d8553d9902d8] <==
	* I0813 21:11:36.963490       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 21:11:36.968089       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 21:11:37.741856       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:11:37.741968       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 21:11:37.761147       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 21:11:37.769570       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 21:11:37.769972       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:11:38.492178       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:11:38.569200       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 21:11:38.694389       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.163]
	I0813 21:11:38.695926       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 21:11:38.705719       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 21:11:39.373709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:11:40.588382       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:11:40.678180       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:11:46.165453       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 21:11:53.802032       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 21:11:53.901687       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0813 21:12:01.126984       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 21:12:01.127288       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:12:01.127306       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:12:15.140803       1 client.go:360] parsed scheme: "passthrough"
	I0813 21:12:15.141000       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 21:12:15.141140       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
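The v1beta1.metrics.k8s.io failure above is downstream of the metrics-server pod never starting (its image pull against fake.domain fails, as the containerd and kubelet logs show): the aggregation layer has no healthy backend to proxy to, so it reports 503 and requeues. The same 503 is observable from outside the apiserver; a sketch assuming a kubectl proxy is running on its default 127.0.0.1:8001:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// With metrics-server down, the aggregated metrics API group
		// returns 503 "service unavailable".
		resp, err := http.Get("http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/nodes")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // expect: 503 Service Unavailable
		fmt.Println(string(body))
	}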
	
	* 
	* ==> kube-controller-manager [e397d877bd7e069ea50cce428d1c19f19871b2249b0a7166520b8113be213702] <==
	* I0813 21:11:58.192881       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-qq4n6"
	I0813 21:11:58.918615       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:11:58.942677       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0813 21:11:58.978107       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:58.978815       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.028598       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.044163       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.079167       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.080031       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.096027       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.096043       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.110042       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.110544       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.118079       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.118580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.137105       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.137996       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.140839       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:11:59.140148       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.162676       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:59.162856       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:59.162883       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.162896       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:11:59.205811       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-mk55h"
	I0813 21:11:59.212679       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-6tdsg"
	
	* 
	* ==> kube-proxy [bb9a649072a37d5d502611f7ee44c15effc86e70188f794d8ac809ad5ff00b47] <==
	* I0813 21:11:56.070946       1 node.go:172] Successfully retrieved node IP: 192.168.39.163
	I0813 21:11:56.071157       1 server_others.go:140] Detected node IP 192.168.39.163
	W0813 21:11:56.071597       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:11:56.338613       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:11:56.338655       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:11:56.338673       1 server_others.go:212] Using iptables Proxier.
	I0813 21:11:56.341755       1 server.go:643] Version: v1.21.3
	I0813 21:11:56.349955       1 config.go:315] Starting service config controller
	I0813 21:11:56.350464       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 21:11:56.350752       1 config.go:224] Starting endpoint slice config controller
	I0813 21:11:56.350764       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 21:11:56.393085       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 21:11:56.405925       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:11:56.453407       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 21:11:56.454916       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f1064867a5630aaf15fdc2c8407dfe0d65087dd5fe344a5a840878c46e2c4054] <==
	* E0813 21:11:36.910519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:36.911024       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:36.911407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:36.911982       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:36.912548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.913029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.913622       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:36.914028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:36.914350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:11:36.914853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:11:36.915294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:36.915652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:36.919151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:37.732850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:11:37.733622       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:11:37.840822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:11:37.880478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:11:37.921953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:37.981661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:11:38.050116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:11:38.064172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:11:38.124858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:11:38.179975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:11:38.215469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 21:11:40.105771       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:03 UTC, end at Fri 2021-08-13 21:12:40 UTC. --
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.426449    5597 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h6b4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qq4n6_kube-system(c5878f91-7def-4945-96e9-d0ffc69ebaa4): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.426528    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.530092    5597 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l9rz\" (UniqueName: \"kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz\") pod \"c34d77cb-c710-4c58-bda3-046fff1434c4\" (UID: \"c34d77cb-c710-4c58-bda3-046fff1434c4\") "
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.530162    5597 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume\") pod \"c34d77cb-c710-4c58-bda3-046fff1434c4\" (UID: \"c34d77cb-c710-4c58-bda3-046fff1434c4\") "
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: W0813 21:12:00.532994    5597 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/c34d77cb-c710-4c58-bda3-046fff1434c4/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.535689    5597 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "c34d77cb-c710-4c58-bda3-046fff1434c4" (UID: "c34d77cb-c710-4c58-bda3-046fff1434c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.558817    5597 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz" (OuterVolumeSpecName: "kube-api-access-6l9rz") pod "c34d77cb-c710-4c58-bda3-046fff1434c4" (UID: "c34d77cb-c710-4c58-bda3-046fff1434c4"). InnerVolumeSpecName "kube-api-access-6l9rz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.631004    5597 reconciler.go:319] "Volume detached for volume \"kube-api-access-6l9rz\" (UniqueName: \"kubernetes.io/projected/c34d77cb-c710-4c58-bda3-046fff1434c4-kube-api-access-6l9rz\") on node \"default-k8s-different-port-20210813210121-393438\" DevicePath \"\""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:00.631101    5597 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34d77cb-c710-4c58-bda3-046fff1434c4-config-volume\") on node \"default-k8s-different-port-20210813210121-393438\" DevicePath \"\""
	Aug 13 21:12:00 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:00.726859    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:07 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:07.256518    5597 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod8c6aee04-a20c-445c-835a-5dc57e81b7f5\": RecentStats: unable to find data in memory cache], [\"/kubepods/burstable/podc5878f91-7def-4945-96e9-d0ffc69ebaa4\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:12:08 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:08.891908    5597 scope.go:111] "RemoveContainer" containerID="c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:09.903006    5597 scope.go:111] "RemoveContainer" containerID="c7951e1ed71fefef6788f623408ea7e3e2e9a752914a5c27750899416a9a8e0c"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:09.909072    5597 scope.go:111] "RemoveContainer" containerID="e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	Aug 13 21:12:09 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:09.913551    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-mk55h_kubernetes-dashboard(c4b71b47-1c44-4b09-b5ec-4a9708e68adb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-mk55h" podUID=c4b71b47-1c44-4b09-b5ec-4a9708e68adb
	Aug 13 21:12:10 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:10.910040    5597 scope.go:111] "RemoveContainer" containerID="e6c135b981d86a13280c2ed8049d04084e88fa9de9ea406e56406e011115a4a0"
	Aug 13 21:12:10 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:10.911370    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-mk55h_kubernetes-dashboard(c4b71b47-1c44-4b09-b5ec-4a9708e68adb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-mk55h" podUID=c4b71b47-1c44-4b09-b5ec-4a9708e68adb
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.324586    5597 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.324809    5597 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.325202    5597 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h6b4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qq4n6_kube-system(c5878f91-7def-4945-96e9-d0ffc69ebaa4): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:12:11 default-k8s-different-port-20210813210121-393438 kubelet[5597]: E0813 21:12:11.325571    5597 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qq4n6" podUID=c5878f91-7def-4945-96e9-d0ffc69ebaa4
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 kubelet[5597]: I0813 21:12:14.210143    5597 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:14 default-k8s-different-port-20210813210121-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
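The clean systemd stop of the kubelet at 21:12:14 is not part of the failure; it is the pause under test doing its work. The pause trace for the old-k8s-version profile further down shows the same sequence minikube runs over SSH: systemctl is-active to check the kubelet, then systemctl disable --now kubelet to stop it. A minimal local sketch of that liveness check, assuming a systemd host:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "systemctl is-active --quiet <unit>" exits 0 iff the unit is
		// active, which is how the pause path decides kubelet is running.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}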
	
	* 
	* ==> kubernetes-dashboard [13422228dfbf23a4715c924e81513db1073b556c592969463eb6e8059a930b55] <==
	* 2021/08/13 21:12:02 Using namespace: kubernetes-dashboard
	2021/08/13 21:12:02 Using in-cluster config to connect to apiserver
	2021/08/13 21:12:02 Using secret token for csrf signing
	2021/08/13 21:12:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:12:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:12:02 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:12:02 Generating JWE encryption key
	2021/08/13 21:12:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:12:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:12:02 Initializing JWE encryption key from synchronized object
	2021/08/13 21:12:02 Creating in-cluster Sidecar client
	2021/08/13 21:12:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:12:02 Serving insecurely on HTTP port: 9090
	2021/08/13 21:12:02 Starting overwatch
	
	* 
	* ==> storage-provisioner [9e8933c4e874c83f3df2f797d7ad7bf0c649215b84b1f82dc17ead87f5b0ee9e] <==
	* I0813 21:12:01.638722       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:12:01.686348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:12:01.686672       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:12:01.705039       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:12:01.706137       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03!
	I0813 21:12:01.711209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92d79f95-b534-4d8c-a17a-ccb7bcffd25b", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03 became leader
	I0813 21:12:01.806399       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210813210121-393438_b2352404-728a-4867-b701-4d7578379b03!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:12:39.055992  436061 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (26.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (7.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813205952-393438 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210813205952-393438 --alsologtostderr -v=1: exit status 80 (2.610386693s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20210813205952-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:12:40.442769  436125 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:40.442874  436125 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:40.442881  436125 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:40.442885  436125 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:40.443088  436125 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:40.443322  436125 out.go:305] Setting JSON to false
	I0813 21:12:40.443350  436125 mustload.go:65] Loading cluster: old-k8s-version-20210813205952-393438
	I0813 21:12:40.443739  436125 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:12:40.444244  436125 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:40.444306  436125 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:40.459696  436125 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40455
	I0813 21:12:40.460210  436125 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:40.460912  436125 main.go:130] libmachine: Using API Version  1
	I0813 21:12:40.460939  436125 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:40.461365  436125 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:40.461575  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetState
	I0813 21:12:40.465162  436125 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:12:40.465623  436125 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:40.465666  436125 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:40.479518  436125 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0813 21:12:40.480021  436125 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:40.480510  436125 main.go:130] libmachine: Using API Version  1
	I0813 21:12:40.480540  436125 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:40.480906  436125 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:40.481082  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:12:40.481778  436125 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210813205952-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
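The %!s(bool=false) and %!s(int=8443) markers peppering the flag dump above are not corruption: they are Go's fmt package reporting a %s verb applied to a non-string operand, which is what happens when pause.go prints the whole option map through string formatting. A two-line illustration:

	package main

	import "fmt"

	func main() {
		// fmt emits %!s(TYPE=VALUE) whenever %s meets a non-string operand.
		fmt.Printf("%s %s\n", false, 8443) // prints: %!s(bool=false) %!s(int=8443)
	}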
	I0813 21:12:40.484451  436125 out.go:177] * Pausing node old-k8s-version-20210813205952-393438 ... 
	I0813 21:12:40.484475  436125 host.go:66] Checking if "old-k8s-version-20210813205952-393438" exists ...
	I0813 21:12:40.484875  436125 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:12:40.484919  436125 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:12:40.498111  436125 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0813 21:12:40.498627  436125 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:12:40.499224  436125 main.go:130] libmachine: Using API Version  1
	I0813 21:12:40.499253  436125 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:12:40.499680  436125 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:12:40.499895  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .DriverName
	I0813 21:12:40.500121  436125 ssh_runner.go:149] Run: systemctl --version
	I0813 21:12:40.500153  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHHostname
	I0813 21:12:40.506582  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:12:40.506966  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:3b:5e", ip: ""} in network mk-old-k8s-version-20210813205952-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:04:38 +0000 UTC Type:0 Mac:52:54:00:55:3b:5e Iaid: IPaddr:192.168.83.180 Prefix:24 Hostname:old-k8s-version-20210813205952-393438 Clientid:01:52:54:00:55:3b:5e}
	I0813 21:12:40.507009  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) DBG | domain old-k8s-version-20210813205952-393438 has defined IP address 192.168.83.180 and MAC address 52:54:00:55:3b:5e in network mk-old-k8s-version-20210813205952-393438
	I0813 21:12:40.507175  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHPort
	I0813 21:12:40.507387  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHKeyPath
	I0813 21:12:40.507576  436125 main.go:130] libmachine: (old-k8s-version-20210813205952-393438) Calling .GetSSHUsername
	I0813 21:12:40.507741  436125 sshutil.go:53] new ssh client: &{IP:192.168.83.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/old-k8s-version-20210813205952-393438/id_rsa Username:docker}
	I0813 21:12:40.624458  436125 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:40.636763  436125 pause.go:50] kubelet running: true
	I0813 21:12:40.636817  436125 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:40.897938  436125 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:40.898069  436125 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:41.099692  436125 cri.go:76] found id: "2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed"
	I0813 21:12:41.099723  436125 cri.go:76] found id: "7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00"
	I0813 21:12:41.099730  436125 cri.go:76] found id: "0292daffd3bf6a4ddd0f4384fce377799efbf1e96de28aeb16cba46af2c1be35"
	I0813 21:12:41.099735  436125 cri.go:76] found id: "404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0"
	I0813 21:12:41.099741  436125 cri.go:76] found id: "e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134"
	I0813 21:12:41.099748  436125 cri.go:76] found id: "dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689"
	I0813 21:12:41.099755  436125 cri.go:76] found id: "087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e"
	I0813 21:12:41.099762  436125 cri.go:76] found id: "489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c"
	I0813 21:12:41.099769  436125 cri.go:76] found id: "c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5"
	I0813 21:12:41.099780  436125 cri.go:76] found id: "06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd"
	I0813 21:12:41.099789  436125 cri.go:76] found id: ""
	I0813 21:12:41.099839  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:41.148324  436125 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","pid":8039,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd/rootfs","created":"2021-08-13T21:11:20.730589518Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","pid":6582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e/rootfs","created":"2021-08-13T21:10:49.892436809Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","pid":7961,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24/rootfs","created":"2021-08-13T21:11:20.445595897Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-7pkrv_0127869d-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","pid":8323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed/rootfs","created":"2021-08-13T21:11:47.369919069Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","pid":7202,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f/rootfs","created":"2021-08-13T21:11:16.369198856Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-vlm5d_fea5b365-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","pid":6991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b/rootfs","created":"2021-08-13T21:11:15.744806218Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zqww7_fe91b2c2-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","pid":7143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0/rootfs","created":"2021-08-13T21:11:16.202157862Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","pid":6520,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c/rootfs","created":"2021-08-13T21:10:49.773944983Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","pid":7970,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021/rootfs","created":"2021-08-13T21:11:20.477972667Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-
id":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-xv8fc_0111f547-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","pid":6456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8/rootfs","created":"2021-08-13T21:10:49.57508219Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813205952-393438_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-d
ev","id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","pid":6436,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64/rootfs","created":"2021-08-13T21:10:49.586428702Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813205952-393438_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","pid":7749,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","rootfs":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00/rootfs","created":"2021-08-13T21:11:20.01837186Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","pid":7650,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b/rootfs","created":"2021-08-13T21:11:19.450247434Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","io.kubernetes.cri.sandbox-log-directory
":"/var/log/pods/kube-system_storage-provisioner_008cc472-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","pid":6446,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0/rootfs","created":"2021-08-13T21:10:49.569414012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813205952-393438_1328ee8839cff8059abc45c14aaf53df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","pid":6403,"status":"running","bundle":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b/rootfs","created":"2021-08-13T21:10:49.434162058Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813205952-393438_0d60c9c2baf7847801d6bb7e8bf52dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","pid":6599,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689/rootfs","created":"2021-08-13T21:10
:50.015455918Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","pid":6642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134/rootfs","created":"2021-08-13T21:10:50.968800021Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","pid":7995,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42/rootfs","created":"2021-08-13T21:11:20.559835819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-dws7v_0127056f-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"}]
	I0813 21:12:41.148588  436125 cri.go:113] list returned 18 containers
	I0813 21:12:41.148608  436125 cri.go:116] container: {ID:06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd Status:running}
	I0813 21:12:41.148636  436125 cri.go:116] container: {ID:087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e Status:running}
	I0813 21:12:41.148645  436125 cri.go:116] container: {ID:1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 Status:running}
	I0813 21:12:41.148652  436125 cri.go:118] skipping 1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 - not in ps
	I0813 21:12:41.148661  436125 cri.go:116] container: {ID:2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed Status:running}
	I0813 21:12:41.148669  436125 cri.go:116] container: {ID:2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f Status:running}
	I0813 21:12:41.148679  436125 cri.go:118] skipping 2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f - not in ps
	I0813 21:12:41.148685  436125 cri.go:116] container: {ID:2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b Status:running}
	I0813 21:12:41.148697  436125 cri.go:118] skipping 2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b - not in ps
	I0813 21:12:41.148703  436125 cri.go:116] container: {ID:404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0 Status:running}
	I0813 21:12:41.148710  436125 cri.go:116] container: {ID:489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c Status:running}
	I0813 21:12:41.148717  436125 cri.go:116] container: {ID:4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 Status:running}
	I0813 21:12:41.148725  436125 cri.go:118] skipping 4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 - not in ps
	I0813 21:12:41.148730  436125 cri.go:116] container: {ID:4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 Status:running}
	I0813 21:12:41.148736  436125 cri.go:118] skipping 4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 - not in ps
	I0813 21:12:41.148741  436125 cri.go:116] container: {ID:5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 Status:running}
	I0813 21:12:41.148751  436125 cri.go:118] skipping 5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 - not in ps
	I0813 21:12:41.148756  436125 cri.go:116] container: {ID:7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00 Status:running}
	I0813 21:12:41.148766  436125 cri.go:116] container: {ID:8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b Status:running}
	I0813 21:12:41.148777  436125 cri.go:118] skipping 8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b - not in ps
	I0813 21:12:41.148789  436125 cri.go:116] container: {ID:cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 Status:running}
	I0813 21:12:41.148797  436125 cri.go:118] skipping cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 - not in ps
	I0813 21:12:41.148802  436125 cri.go:116] container: {ID:d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b Status:running}
	I0813 21:12:41.148813  436125 cri.go:118] skipping d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b - not in ps
	I0813 21:12:41.148819  436125 cri.go:116] container: {ID:dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689 Status:running}
	I0813 21:12:41.148828  436125 cri.go:116] container: {ID:e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134 Status:running}
	I0813 21:12:41.148839  436125 cri.go:116] container: {ID:eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 Status:running}
	I0813 21:12:41.148864  436125 cri.go:118] skipping eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 - not in ps
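The skip decisions above combine two filters: IDs absent from the crictl listing (sandbox records) are dropped as "not in ps", and on later passes containers already paused are dropped because the pauser wants state "running". A sketch of that filtering under assumed types, not minikube's real implementation:

    package main

    import "fmt"

    // container pairs an ID from `runc list` with its status, matching
    // the {ID Status} pairs printed at cri.go:116. These types are
    // assumptions for illustration only.
    type container struct {
        ID     string
        Status string
    }

    // filterContainers drops IDs that crictl did not report (the sandbox
    // records logged as "not in ps") and containers not in wantState.
    func filterContainers(all []container, inPs map[string]bool, wantState string) []container {
        var keep []container
        for _, c := range all {
            switch {
            case !inPs[c.ID]:
                fmt.Printf("skipping %s - not in ps\n", c.ID)
            case c.Status != wantState:
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, wantState)
            default:
                keep = append(keep, c)
            }
        }
        return keep
    }

    func main() {
        // Truncated IDs, illustrative only.
        all := []container{
            {ID: "06a5d5f9", Status: "paused"},
            {ID: "087dc843", Status: "running"},
            {ID: "1adc61e9", Status: "running"}, // sandbox: absent from crictl ps
        }
        inPs := map[string]bool{"06a5d5f9": true, "087dc843": true}
        fmt.Println(filterContainers(all, inPs, "running"))
    }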
	I0813 21:12:41.148922  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd
	I0813 21:12:41.177428  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd 087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e
	I0813 21:12:41.200497  436125 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd 087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:41Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:41.476872  436125 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:41.491080  436125 pause.go:50] kubelet running: false
	I0813 21:12:41.491150  436125 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:41.685781  436125 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:41.685880  436125 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:41.833568  436125 cri.go:76] found id: "2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed"
	I0813 21:12:41.833601  436125 cri.go:76] found id: "7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00"
	I0813 21:12:41.833608  436125 cri.go:76] found id: "0292daffd3bf6a4ddd0f4384fce377799efbf1e96de28aeb16cba46af2c1be35"
	I0813 21:12:41.833613  436125 cri.go:76] found id: "404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0"
	I0813 21:12:41.833619  436125 cri.go:76] found id: "e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134"
	I0813 21:12:41.833625  436125 cri.go:76] found id: "dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689"
	I0813 21:12:41.833635  436125 cri.go:76] found id: "087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e"
	I0813 21:12:41.833648  436125 cri.go:76] found id: "489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c"
	I0813 21:12:41.833655  436125 cri.go:76] found id: "c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5"
	I0813 21:12:41.833664  436125 cri.go:76] found id: "06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd"
	I0813 21:12:41.833669  436125 cri.go:76] found id: ""
	I0813 21:12:41.833713  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:41.874059  436125 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","pid":8039,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd/rootfs","created":"2021-08-13T21:11:20.730589518Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","pid":6582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e/rootfs","created":"2021-08-13T21:10:49.892436809Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","pid":7961,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24/rootfs","created":"2021-08-13T21:11:20.445595897Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kub
ernetes-dashboard_kubernetes-dashboard-5d8978d65d-7pkrv_0127869d-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","pid":8323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed/rootfs","created":"2021-08-13T21:11:47.369919069Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","pid":7202,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f
","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f/rootfs","created":"2021-08-13T21:11:16.369198856Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-vlm5d_fea5b365-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","pid":6991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b/rootfs","created":"2021-08-13T21:11:15.744806218Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d648249c
d7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zqww7_fe91b2c2-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","pid":7143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0/rootfs","created":"2021-08-13T21:11:16.202157862Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","pid":6520,"status":"running","bundle":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c/rootfs","created":"2021-08-13T21:10:49.773944983Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","pid":7970,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021/rootfs","created":"2021-08-13T21:11:20.477972667Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-i
d":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-xv8fc_0111f547-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","pid":6456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8/rootfs","created":"2021-08-13T21:10:49.57508219Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813205952-393438_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-de
v","id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","pid":6436,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64/rootfs","created":"2021-08-13T21:10:49.586428702Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813205952-393438_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","pid":7749,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","rootfs":"/run/containe
rd/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00/rootfs","created":"2021-08-13T21:11:20.01837186Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","pid":7650,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b/rootfs","created":"2021-08-13T21:11:19.450247434Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","io.kubernetes.cri.sandbox-log-directory"
:"/var/log/pods/kube-system_storage-provisioner_008cc472-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","pid":6446,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0/rootfs","created":"2021-08-13T21:10:49.569414012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813205952-393438_1328ee8839cff8059abc45c14aaf53df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","pid":6403,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b/rootfs","created":"2021-08-13T21:10:49.434162058Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813205952-393438_0d60c9c2baf7847801d6bb7e8bf52dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","pid":6599,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689/rootfs","created":"2021-08-13T21:10:
50.015455918Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","pid":6642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134/rootfs","created":"2021-08-13T21:10:50.968800021Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","pid":7995,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42/rootfs","created":"2021-08-13T21:11:20.559835819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-dws7v_0127056f-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"}]
	I0813 21:12:41.874369  436125 cri.go:113] list returned 18 containers
	I0813 21:12:41.874391  436125 cri.go:116] container: {ID:06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd Status:paused}
	I0813 21:12:41.874407  436125 cri.go:122] skipping {06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd paused}: state = "paused", want "running"
	I0813 21:12:41.874420  436125 cri.go:116] container: {ID:087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e Status:running}
	I0813 21:12:41.874427  436125 cri.go:116] container: {ID:1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 Status:running}
	I0813 21:12:41.874434  436125 cri.go:118] skipping 1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 - not in ps
	I0813 21:12:41.874444  436125 cri.go:116] container: {ID:2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed Status:running}
	I0813 21:12:41.874451  436125 cri.go:116] container: {ID:2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f Status:running}
	I0813 21:12:41.874467  436125 cri.go:118] skipping 2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f - not in ps
	I0813 21:12:41.874476  436125 cri.go:116] container: {ID:2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b Status:running}
	I0813 21:12:41.874486  436125 cri.go:118] skipping 2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b - not in ps
	I0813 21:12:41.874494  436125 cri.go:116] container: {ID:404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0 Status:running}
	I0813 21:12:41.874499  436125 cri.go:116] container: {ID:489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c Status:running}
	I0813 21:12:41.874506  436125 cri.go:116] container: {ID:4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 Status:running}
	I0813 21:12:41.874511  436125 cri.go:118] skipping 4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 - not in ps
	I0813 21:12:41.874515  436125 cri.go:116] container: {ID:4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 Status:running}
	I0813 21:12:41.874525  436125 cri.go:118] skipping 4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 - not in ps
	I0813 21:12:41.874530  436125 cri.go:116] container: {ID:5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 Status:running}
	I0813 21:12:41.874540  436125 cri.go:118] skipping 5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 - not in ps
	I0813 21:12:41.874546  436125 cri.go:116] container: {ID:7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00 Status:running}
	I0813 21:12:41.874555  436125 cri.go:116] container: {ID:8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b Status:running}
	I0813 21:12:41.874563  436125 cri.go:118] skipping 8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b - not in ps
	I0813 21:12:41.874569  436125 cri.go:116] container: {ID:cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 Status:running}
	I0813 21:12:41.874577  436125 cri.go:118] skipping cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 - not in ps
	I0813 21:12:41.874581  436125 cri.go:116] container: {ID:d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b Status:running}
	I0813 21:12:41.874585  436125 cri.go:118] skipping d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b - not in ps
	I0813 21:12:41.874589  436125 cri.go:116] container: {ID:dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689 Status:running}
	I0813 21:12:41.874593  436125 cri.go:116] container: {ID:e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134 Status:running}
	I0813 21:12:41.874599  436125 cri.go:116] container: {ID:eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 Status:running}
	I0813 21:12:41.874603  436125 cri.go:118] skipping eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 - not in ps
	I0813 21:12:41.874647  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e
	I0813 21:12:41.896887  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e 2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed
	I0813 21:12:41.925448  436125 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e 2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:41Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:12:42.465792  436125 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:12:42.478805  436125 pause.go:50] kubelet running: false
	I0813 21:12:42.478858  436125 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:12:42.721999  436125 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:12:42.722124  436125 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:12:42.877613  436125 cri.go:76] found id: "2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed"
	I0813 21:12:42.877645  436125 cri.go:76] found id: "7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00"
	I0813 21:12:42.877651  436125 cri.go:76] found id: "0292daffd3bf6a4ddd0f4384fce377799efbf1e96de28aeb16cba46af2c1be35"
	I0813 21:12:42.877655  436125 cri.go:76] found id: "404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0"
	I0813 21:12:42.877658  436125 cri.go:76] found id: "e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134"
	I0813 21:12:42.877662  436125 cri.go:76] found id: "dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689"
	I0813 21:12:42.877667  436125 cri.go:76] found id: "087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e"
	I0813 21:12:42.877672  436125 cri.go:76] found id: "489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c"
	I0813 21:12:42.877678  436125 cri.go:76] found id: "c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5"
	I0813 21:12:42.877691  436125 cri.go:76] found id: "06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd"
	I0813 21:12:42.877701  436125 cri.go:76] found id: ""
	I0813 21:12:42.877747  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:12:42.931782  436125 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","pid":8039,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd/rootfs","created":"2021-08-13T21:11:20.730589518Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","pid":6582,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e","rootfs":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e/rootfs","created":"2021-08-13T21:10:49.892436809Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","pid":7961,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24/rootfs","created":"2021-08-13T21:11:20.445595897Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube
rnetes-dashboard_kubernetes-dashboard-5d8978d65d-7pkrv_0127869d-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","pid":8323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed/rootfs","created":"2021-08-13T21:11:47.369919069Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","pid":7202,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f"
,"rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f/rootfs","created":"2021-08-13T21:11:16.369198856Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-vlm5d_fea5b365-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","pid":6991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b/rootfs","created":"2021-08-13T21:11:15.744806218Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d648249cd
7af85d0557c613f7024993543a74a940b578b7e17883002687c81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zqww7_fe91b2c2-fc7a-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","pid":7143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0/rootfs","created":"2021-08-13T21:11:16.202157862Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","pid":6520,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c/rootfs","created":"2021-08-13T21:10:49.773944983Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","pid":7970,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021/rootfs","created":"2021-08-13T21:11:20.477972667Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id
":"4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-xv8fc_0111f547-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","pid":6456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8/rootfs","created":"2021-08-13T21:10:49.57508219Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210813205952-393438_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","pid":6436,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64/rootfs","created":"2021-08-13T21:10:49.586428702Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210813205952-393438_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","pid":7749,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00/rootfs","created":"2021-08-13T21:11:20.01837186Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","pid":7650,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b/rootfs","created":"2021-08-13T21:11:19.450247434Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b","io.kubernetes.cri.sandbox-log-directory":
"/var/log/pods/kube-system_storage-provisioner_008cc472-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","pid":6446,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0/rootfs","created":"2021-08-13T21:10:49.569414012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210813205952-393438_1328ee8839cff8059abc45c14aaf53df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","pid":6403,"status":"running","bundle":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b/rootfs","created":"2021-08-13T21:10:49.434162058Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210813205952-393438_0d60c9c2baf7847801d6bb7e8bf52dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","pid":6599,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689/rootfs","created":"2021-08-13T21:10:5
0.015455918Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","pid":6642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134/rootfs","created":"2021-08-13T21:10:50.968800021Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","pid":7995,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42/rootfs","created":"2021-08-13T21:11:20.559835819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-dws7v_0127056f-fc7b-11eb-a3a8-525400553b5e"},"owner":"root"}]
	I0813 21:12:42.932072  436125 cri.go:113] list returned 18 containers
	I0813 21:12:42.932088  436125 cri.go:116] container: {ID:06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd Status:paused}
	I0813 21:12:42.932107  436125 cri.go:122] skipping {06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd paused}: state = "paused", want "running"
	I0813 21:12:42.932119  436125 cri.go:116] container: {ID:087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e Status:paused}
	I0813 21:12:42.932127  436125 cri.go:122] skipping {087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e paused}: state = "paused", want "running"
	I0813 21:12:42.932134  436125 cri.go:116] container: {ID:1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 Status:running}
	I0813 21:12:42.932145  436125 cri.go:118] skipping 1adc61e97bc2b73c14febd3ce933f4c2cd9ebe45e694f4b28312fd42ad5aeb24 - not in ps
	I0813 21:12:42.932151  436125 cri.go:116] container: {ID:2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed Status:running}
	I0813 21:12:42.932158  436125 cri.go:116] container: {ID:2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f Status:running}
	I0813 21:12:42.932165  436125 cri.go:118] skipping 2272e1367f1cd5373f90f43fae33720d7baee0b2f590c85352b86771a952a97f - not in ps
	I0813 21:12:42.932170  436125 cri.go:116] container: {ID:2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b Status:running}
	I0813 21:12:42.932177  436125 cri.go:118] skipping 2d648249cd7af85d0557c613f7024993543a74a940b578b7e17883002687c81b - not in ps
	I0813 21:12:42.932182  436125 cri.go:116] container: {ID:404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0 Status:running}
	I0813 21:12:42.932189  436125 cri.go:116] container: {ID:489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c Status:running}
	I0813 21:12:42.932195  436125 cri.go:116] container: {ID:4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 Status:running}
	I0813 21:12:42.932202  436125 cri.go:118] skipping 4cf6d1cebe1204e20eb7836d4e587bea216b89e06f62b8c1f2b484500e642021 - not in ps
	I0813 21:12:42.932208  436125 cri.go:116] container: {ID:4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 Status:running}
	I0813 21:12:42.932215  436125 cri.go:118] skipping 4f286622b41c94e00f16c12ba3ed1acb3388e40d3f8ce66b69c202782d7958c8 - not in ps
	I0813 21:12:42.932220  436125 cri.go:116] container: {ID:5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 Status:running}
	I0813 21:12:42.932227  436125 cri.go:118] skipping 5b9fabbf68980d9cc3ba7f2fc34c6ea8170aa646b517dae4428d96e75d44fa64 - not in ps
	I0813 21:12:42.932239  436125 cri.go:116] container: {ID:7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00 Status:running}
	I0813 21:12:42.932245  436125 cri.go:116] container: {ID:8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b Status:running}
	I0813 21:12:42.932256  436125 cri.go:118] skipping 8c90fbae3ef8480e73d4992163203c45f282822490211587adcd5cb95c33f33b - not in ps
	I0813 21:12:42.932264  436125 cri.go:116] container: {ID:cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 Status:running}
	I0813 21:12:42.932274  436125 cri.go:118] skipping cad4450810105abb899a01d02f7f5a9c6f0f91476c4d1e9c47c8eb1de98c00b0 - not in ps
	I0813 21:12:42.932280  436125 cri.go:116] container: {ID:d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b Status:running}
	I0813 21:12:42.932290  436125 cri.go:118] skipping d23a6e540669592995e69cb6c9cc01b61f07cae7ce049808419c857e541b1e1b - not in ps
	I0813 21:12:42.932295  436125 cri.go:116] container: {ID:dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689 Status:running}
	I0813 21:12:42.932302  436125 cri.go:116] container: {ID:e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134 Status:running}
	I0813 21:12:42.932308  436125 cri.go:116] container: {ID:eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 Status:running}
	I0813 21:12:42.932318  436125 cri.go:118] skipping eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42 - not in ps
	I0813 21:12:42.932370  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed
	I0813 21:12:42.958592  436125 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed 404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0
	I0813 21:12:42.985002  436125 out.go:177] 
	W0813 21:12:42.985170  436125 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed 404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:12:42Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 21:12:42.985186  436125 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:12:42.990246  436125 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:12:42.991792  436125 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210813205952-393438 --alsologtostderr -v=1 failed: exit status 80
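Failure analysis: the GUEST_PAUSE exit above comes from a single malformed invocation. minikube batched two container IDs into one `runc pause` call, but runc's pause subcommand accepts exactly one container ID per call, as its stderr shows ("pause" requires exactly 1 argument(s)). Below is a minimal Go sketch, not minikube's actual pause code, illustrating the one-ID-per-invocation contract; the container IDs are copied from the failed command in the log above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers issues one `runc pause <id>` per container,
	// since runc rejects invocations carrying more than one ID.
	func pauseContainers(ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc",
				"--root", "/run/containerd/runc/k8s.io", "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		// IDs taken from the failed two-argument invocation logged above.
		ids := []string{
			"2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed",
			"404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0",
		}
		if err := pauseContainers(ids); err != nil {
			fmt.Println(err)
		}
	}

Pausing each container in its own invocation keeps the command valid even when the target list grows, which matches the usage text runc prints: `runc pause <container-id>`.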
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438: exit status 2 (318.091142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813205952-393438 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210813205952-393438 logs -n 25: (1.285541256s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:00:44 UTC | Fri, 13 Aug 2021 21:03:16 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:40 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:41 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:12:42
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:12:42.056773  436296 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:42.056840  436296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:42.056844  436296 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:42.056847  436296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:42.056949  436296 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:42.057232  436296 out.go:305] Setting JSON to false
	I0813 21:12:42.100088  436296 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6924,"bootTime":1628882238,"procs":183,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:12:42.100233  436296 start.go:121] virtualization: kvm guest
	I0813 21:12:42.102911  436296 out.go:177] * [auto-20210813205925-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:12:42.104650  436296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:42.103060  436296 notify.go:169] Checking for updates...
	I0813 21:12:42.106179  436296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:12:42.107577  436296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:42.109050  436296 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:12:42.109665  436296 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:42.109826  436296 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:42.109991  436296 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:12:42.110047  436296 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:12:42.144281  436296 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:12:42.144311  436296 start.go:278] selected driver: kvm2
	I0813 21:12:42.144318  436296 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:12:42.144340  436296 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:12:42.145746  436296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:42.145915  436296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:12:42.159569  436296 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:12:42.159621  436296 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 21:12:42.159763  436296 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:12:42.159786  436296 cni.go:93] Creating CNI manager for ""
	I0813 21:12:42.159792  436296 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:42.159802  436296 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:12:42.159835  436296 start_flags.go:277] config:
	{Name:auto-20210813205925-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:42.159928  436296 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:42.161785  436296 out.go:177] * Starting control plane node auto-20210813205925-393438 in cluster auto-20210813205925-393438
	I0813 21:12:42.161816  436296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:12:42.161847  436296 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:12:42.161873  436296 cache.go:56] Caching tarball of preloaded images
	I0813 21:12:42.162010  436296 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:12:42.162031  436296 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:12:42.162170  436296 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/config.json ...
	I0813 21:12:42.162197  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/config.json: {Name:mkf16989dd7f37e3f1839d7699f259f9e903fa2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:42.162363  436296 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:12:42.162394  436296 start.go:313] acquiring machines lock for auto-20210813205925-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:12:42.162453  436296 start.go:317] acquired machines lock for "auto-20210813205925-393438" in 39.857µs
	I0813 21:12:42.162478  436296 start.go:89] Provisioning new machine with config: &{Name:auto-20210813205925-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:12:42.162599  436296 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:12:41.175731  435569 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:12:41.175809  435569 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:12:41.175876  435569 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:41.214884  435569 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:12:41.214906  435569 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:12:41.214950  435569 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:41.252094  435569 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:12:41.252126  435569 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:12:41.252207  435569 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:12:41.289679  435569 cni.go:93] Creating CNI manager for ""
	I0813 21:12:41.289699  435569 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:41.289710  435569 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:12:41.289724  435569 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813211202-393438 NodeName:newest-cni-20210813211202-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:12:41.289867  435569 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210813211202-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:12:41.289978  435569 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813211202-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:12:41.290042  435569 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:12:41.299692  435569 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:12:41.299771  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:12:41.307525  435569 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (590 bytes)
	I0813 21:12:41.322554  435569 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:12:41.336106  435569 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0813 21:12:41.349512  435569 ssh_runner.go:149] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I0813 21:12:41.354341  435569 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:12:41.367797  435569 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438 for IP: 192.168.61.119
	I0813 21:12:41.367851  435569 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:12:41.367874  435569 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:12:41.367932  435569 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key
	I0813 21:12:41.367944  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt with IP's: []
	I0813 21:12:41.641141  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt ...
	I0813 21:12:41.641168  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt: {Name:mk09e521f9edb034f8aaad0698c8e79df5677721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.641340  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key ...
	I0813 21:12:41.641359  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key: {Name:mkec11bd39bd0f9d2e3b2635f1eaa5d58be99340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.641478  435569 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481
	I0813 21:12:41.641490  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 with IP's: [192.168.61.119 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:12:41.886160  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 ...
	I0813 21:12:41.886187  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481: {Name:mk61fc2aad29c092c6e210eb1d540d0d95661689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.886397  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481 ...
	I0813 21:12:41.886415  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481: {Name:mk14ea2e7606198f4949de88d784e8f460bc7273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.886522  435569 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt
	I0813 21:12:41.886581  435569 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key
	I0813 21:12:41.886637  435569 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key
	I0813 21:12:41.886649  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt with IP's: []
	I0813 21:12:42.037320  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt ...
	I0813 21:12:42.037358  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt: {Name:mk84614da15c1e2c3b760f168cef37ae8e8d74a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:42.037570  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key ...
	I0813 21:12:42.037591  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key: {Name:mk0311ad4bfad8801a1e24131f219a083ec2fe61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:42.037840  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:12:42.037895  435569 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:12:42.037911  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:12:42.037951  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:12:42.037986  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:12:42.038018  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:12:42.038081  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:12:42.039326  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:12:42.059990  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:12:42.080299  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:12:42.104519  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:12:42.126817  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:12:42.147577  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:12:42.169539  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:12:42.190097  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:12:42.207185  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:12:42.227830  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:12:42.247569  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:12:42.265540  435569 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:12:42.278263  435569 ssh_runner.go:149] Run: openssl version
	I0813 21:12:42.284450  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:12:42.293028  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.297835  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.297891  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.304258  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:12:42.312584  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:12:42.321104  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.326032  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.326075  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.332142  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:12:42.340129  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:12:42.347747  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.352410  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.352443  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.358219  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:12:42.366060  435569 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:42.366141  435569 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:12:42.366180  435569 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:39.090342  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:39.090383  434502 pod_ready.go:81] duration metric: took 4m0.416213127s waiting for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" ...
	E0813 21:12:39.090394  434502 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:12:39.090417  434502 pod_ready.go:38] duration metric: took 4m39.85855826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:12:39.090451  434502 kubeadm.go:604] restartCluster took 6m28.851872519s
	W0813 21:12:39.090600  434502 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:12:39.090644  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	c003d70f07d34       523cad1a4df73       32 seconds ago       Exited              dashboard-metrics-scraper   3                   eec8b69c5991b
	2155aae82b386       eb516548c180f       56 seconds ago       Running             coredns                     1                   2272e1367f1cd
	06a5d5f9669ef       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   1adc61e97bc2b
	7274a6c3c6c7c       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   8c90fbae3ef84
	0292daffd3bf6       eb516548c180f       About a minute ago   Exited              coredns                     0                   2272e1367f1cd
	404793ed86da6       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   2d648249cd7af
	e6043e9c5e706       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   cad4450810105
	dba11fd3e73c7       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   4f286622b41c9
	087dc8434df74       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   5b9fabbf68980
	489ae9807a474       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   d23a6e5406695
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:04:37 UTC, end at Fri 2021-08-13 21:12:44 UTC. --
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:11.665548427Z" level=info msg="CreateContainer within sandbox \"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:11.666968197Z" level=info msg="StartContainer for \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.010850862Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.139676932Z" level=info msg="StartContainer for \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\" returns successfully"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.171866306Z" level=info msg="Finish piping \"stdout\" of container exec \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.172278644Z" level=info msg="Finish piping \"stderr\" of container exec \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.173259529Z" level=info msg="Exec process \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.174648763Z" level=info msg="Finish piping stderr of container \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.174959877Z" level=info msg="Finish piping stdout of container \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.178600286Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.180449361Z" level=info msg="TaskExit event &TaskExit{ContainerID:c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5,ID:c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5,Pid:8721,ExitStatus:1,ExitedAt:2021-08-13 21:12:12.179615565 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.243224578Z" level=info msg="shim disconnected" id=c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.243749099Z" level=error msg="copy shim log" error="read /proc/self/fd/126: file already closed"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.285111331Z" level=info msg="RemoveContainer for \"204be8722375961d673866a98d6de643faa15a7aca9b6eac5006f5a2bd1ce54e\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.296826300Z" level=info msg="RemoveContainer for \"204be8722375961d673866a98d6de643faa15a7aca9b6eac5006f5a2bd1ce54e\" returns successfully"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.011234858Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.117422220Z" level=info msg="Finish piping \"stderr\" of container exec \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\""
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.117718335Z" level=info msg="Finish piping \"stdout\" of container exec \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\""
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.119307382Z" level=info msg="Exec process \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.122976964Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.011718869Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.132639721Z" level=info msg="Finish piping \"stderr\" of container exec \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\""
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.133644455Z" level=info msg="Exec process \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.134077178Z" level=info msg="Finish piping \"stdout\" of container exec \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\""
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.143064558Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
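	The ExecSync entries above are etcd's liveness probe. The exact command containerd runs inside the etcd container every 10 seconds, copied from the log:
	
	    /bin/sh -ec 'ETCDCTL_API=3 etcdctl \
	      --endpoints=https://[127.0.0.1]:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	      --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	      get foo'
	
	The "returns with exit code 0" lines show the probe passing each time.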
	
	* 
	* ==> coredns [0292daffd3bf6a4ddd0f4384fce377799efbf1e96de28aeb16cba46af2c1be35] <==
	* .:53
	2021-08-13T21:11:21.852Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:11:21.852Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:11:21.852Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	E0813 21:11:46.853800       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 21:11:46.853800       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-vlm5d.unknownuser.log.ERROR.20210813-211146.1: no such file or directory
	
	* 
	* ==> coredns [2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed] <==
	* .:53
	2021-08-13T21:11:47.547Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:11:47.547Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:11:47.547Z [INFO] plugin/reload: Running configuration MD5 = 6c0e799ff6797682aae95e2097dfc0d9
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813205952-393438
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813205952-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813205952-393438
	                    minikube.k8s.io/updated_at=2021_08_13T21_11_00_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:10:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.180
	  Hostname:    old-k8s-version-20210813205952-393438
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	System Info:
	 Machine ID:                 b49590f2d3e8410c89f829137ae0deb7
	 System UUID:                b49590f2-d3e8-410c-89f8-29137ae0deb7
	 Boot ID:                    185a9652-d3b5-460d-a40a-2091690a90c6
	 Kernel Version:             4.19.182
	 OS Image:                   Buildroot 2020.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-vlm5d                                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     89s
	  kube-system                etcd-old-k8s-version-20210813205952-393438                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                kube-apiserver-old-k8s-version-20210813205952-393438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                kube-controller-manager-old-k8s-version-20210813205952-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                kube-proxy-zqww7                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                kube-scheduler-old-k8s-version-20210813205952-393438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                metrics-server-8546d8b77b-xv8fc                                  100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         85s
	  kube-system                storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-dws7v                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-7pkrv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             370Mi (17%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                               Message
	  ----    ------                   ----                 ----                                               -------
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasSufficientPID
	  Normal  Starting                 88s                  kube-proxy, old-k8s-version-20210813205952-393438  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +3.320831] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.031760] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.925391] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1735 comm=systemd-network
	[  +0.748465] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.353230] vboxguest: loading out-of-tree module taints kernel.
	[  +0.009635] vboxguest: PCI device not found, probably running on physical hardware.
	[ +19.819445] systemd-fstab-generator[2069]: Ignoring "noauto" for root device
	[  +0.260991] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.154391] systemd-fstab-generator[2116]: Ignoring "noauto" for root device
	[  +0.240964] systemd-fstab-generator[2147]: Ignoring "noauto" for root device
	[Aug13 21:05] systemd-fstab-generator[2337]: Ignoring "noauto" for root device
	[Aug13 21:06] kauditd_printk_skb: 20 callbacks suppressed
	[ +29.372695] kauditd_printk_skb: 125 callbacks suppressed
	[  +3.378760] NFSD: Unable to end grace period: -110
	[  +3.197846] kauditd_printk_skb: 17 callbacks suppressed
	[Aug13 21:07] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:10] systemd-fstab-generator[6255]: Ignoring "noauto" for root device
	[Aug13 21:11] tee (6716): /proc/6520/oom_adj is deprecated, please use /proc/6520/oom_score_adj instead.
	[ +14.712988] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.345279] kauditd_printk_skb: 185 callbacks suppressed
	[ +36.451508] kauditd_printk_skb: 44 callbacks suppressed
	[Aug13 21:12] systemd-fstab-generator[8918]: Ignoring "noauto" for root device
	[  +0.847915] systemd-fstab-generator[8974]: Ignoring "noauto" for root device
	[  +0.998053] systemd-fstab-generator[9027]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134] <==
	* 2021-08-13 21:10:51.739909 I | embed: serving client requests on 127.0.0.1:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 21:12:34.722527 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:4023" took too long (1.584817625s) to execute
	2021-08-13 21:12:34.722782 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.753344359s) to execute
	2021-08-13 21:12:34.723007 W | etcdserver: request "header:<ID:13894303321011756213 > lease_revoke:<id:40d27b415a6f9c70>" with result "size:29" took too long (449.827313ms) to execute
	2021-08-13 21:12:35.859266 W | wal: sync duration of 1.586197491s, expected less than 1s
	2021-08-13 21:12:38.923038 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (699.960666ms) to execute
	2021-08-13 21:12:38.923870 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (3.694306777s) to execute
	2021-08-13 21:12:38.924188 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:2567" took too long (4.133402067s) to execute
	2021-08-13 21:12:38.924441 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (2.914366219s) to execute
	2021-08-13 21:12:38.924704 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (2.797046643s) to execute
	2021-08-13 21:12:38.924924 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-xv8fc.169af9fd2786670d\" " with result "range_response_count:1 size:513" took too long (2.308950967s) to execute
	2021-08-13 21:12:38.924995 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3513" took too long (3.68994946s) to execute
	2021-08-13 21:12:38.925186 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (4.195703284s) to execute
	2021-08-13 21:12:38.926301 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (4.205693806s) to execute
	2021-08-13 21:12:38.928044 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210813205952-393438\" " with result "range_response_count:1 size:404" took too long (3.073607517s) to execute
	2021-08-13 21:12:38.929119 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.62341689s) to execute
	2021-08-13 21:12:39.059193 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:2567" took too long (124.844917ms) to execute
	2021-08-13 21:12:39.066354 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-xv8fc.169af9fd27862677\" " with result "range_response_count:1 size:552" took too long (124.642984ms) to execute
	2021-08-13 21:12:39.066934 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (100.501941ms) to execute
	2021-08-13 21:12:40.045113 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (655.255821ms) to execute
	2021-08-13 21:12:40.046124 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:293" took too long (969.356868ms) to execute
	2021-08-13 21:12:40.048215 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/\" range_end:\"/registry/events/kubernetes-dashboard0\" " with result "range_response_count:17 size:10109" took too long (965.808767ms) to execute
	2021-08-13 21:12:40.050694 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (555.230569ms) to execute
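	The "took too long" warnings and the "wal: sync duration of 1.586197491s" line point at slow disk fsyncs rather than failed requests. One way to confirm etcd health and commit latency with the same credentials the liveness probe uses (a sketch; this command is not part of the captured log):
	
	    ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	      --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	      endpoint health
	    # prints e.g. "https://[127.0.0.1]:2379 is healthy: successfully committed proposal: took = ..."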
	
	* 
	* ==> kernel <==
	*  21:12:44 up 8 min,  0 users,  load average: 1.22, 0.86, 0.40
	Linux old-k8s-version-20210813205952-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c] <==
	* Trace[1765678340]: [4.202079862s] [4.2020523s] Listing from storage done
	I0813 21:12:38.931986       1 trace.go:81] Trace[468325682]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210813205952-393438" (started: 2021-08-13 21:12:35.85366533 +0000 UTC m=+105.979797111) (total time: 3.07830186s):
	Trace[468325682]: [3.07823032s] [3.078176869s] About to write a response
	I0813 21:12:38.932531       1 trace.go:81] Trace[1362351978]: "Get /api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" (started: 2021-08-13 21:12:34.719365835 +0000 UTC m=+104.845497406) (total time: 4.213084343s):
	Trace[1362351978]: [4.212998089s] [4.212971782s] About to write a response
	I0813 21:12:38.938596       1 trace.go:81] Trace[1878302692]: "GuaranteedUpdate etcd3: *core.Event" (started: 2021-08-13 21:12:36.614735396 +0000 UTC m=+106.740867101) (total time: 2.323843716s):
	Trace[1878302692]: [2.312248067s] [2.312248067s] initial value restored
	I0813 21:12:38.939194       1 trace.go:81] Trace[209371397]: "Patch /api/v1/namespaces/kube-system/events/metrics-server-8546d8b77b-xv8fc.169af9fd2786670d" (started: 2021-08-13 21:12:36.614424249 +0000 UTC m=+106.740555951) (total time: 2.324756028s):
	Trace[209371397]: [2.312562711s] [2.312337403s] About to apply patch
	I0813 21:12:39.211714       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:39.211929       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:40.051770       1 trace.go:81] Trace[121213177]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2021-08-13 21:12:39.072186474 +0000 UTC m=+109.198318208) (total time: 979.515663ms):
	Trace[121213177]: [978.518308ms] [978.502443ms] About to write a response
	I0813 21:12:40.055874       1 trace.go:81] Trace[1839725854]: "List /api/v1/namespaces/kubernetes-dashboard/events" (started: 2021-08-13 21:12:39.078921636 +0000 UTC m=+109.205053343) (total time: 976.936003ms):
	Trace[1839725854]: [976.739184ms] [976.662292ms] Listing from storage done
	I0813 21:12:40.212694       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:40.213742       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:41.214378       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:41.214877       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:42.215133       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:42.216170       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:43.217008       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:43.217275       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:44.219309       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:44.219622       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689] <==
	* E0813 21:11:19.011593       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.012299       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.012748       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.013144       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.045861       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.046265       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.100810       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.100884       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.100930       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.100971       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.127229       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.127289       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.140058       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.140116       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.221212       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.221309       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.221445       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.221538       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.420026       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"004dbe06-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-xv8fc
	I0813 21:11:19.553426       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-dws7v
	I0813 21:11:19.570880       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-7pkrv
	E0813 21:11:45.043383       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:47.598231       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:12:15.296877       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:12:19.601383       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0] <==
	* W0813 21:11:16.532298       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 21:11:16.601905       1 server_others.go:148] Using iptables Proxier.
	I0813 21:11:16.602135       1 server_others.go:178] Tearing down inactive rules.
	E0813 21:11:16.718160       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 21:11:16.873112       1 server.go:555] Version: v1.14.0
	I0813 21:11:16.883302       1 config.go:202] Starting service config controller
	I0813 21:11:16.883337       1 config.go:102] Starting endpoints config controller
	I0813 21:11:16.883593       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 21:11:16.883594       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 21:11:16.986783       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 21:11:16.987750       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e] <==
	* W0813 21:10:51.144977       1 authentication.go:55] Authentication is disabled
	I0813 21:10:51.145154       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 21:10:51.147327       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 21:10:55.622200       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:55.622789       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:55.622837       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:55.622878       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:55.622920       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:55.622961       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:55.623010       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:55.625243       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:55.625292       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:55.632202       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:56.630420       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:56.632544       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:56.632606       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:56.632646       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:56.632979       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:56.635371       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:56.638899       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:56.641800       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:56.643286       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:56.645027       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 21:10:58.450867       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 21:10:58.551380       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:04:37 UTC, end at Fri 2021-08-13 21:12:44 UTC. --
	Aug 13 21:11:20 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:20.933069    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:29 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:29.092415    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:30 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:30.098449    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623186    6274 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623235    6274 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623305    6274 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623347    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:11:36 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:36.346701    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:47 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:47.610534    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:48 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:48.194017    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:56 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:56.342969    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.626294    6274 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628308    6274 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628612    6274 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628799    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:11.611440    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:12.280912    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:16 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:16.343871    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:24 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:24.609990    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:29 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:29.609111    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:36 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:36.611262    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:40.612285    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
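
The kubelet lines above show why the harness's pod_ready wait later times out: metrics-server is pinned to the fake.domain registry (see the Audit table below), so every image pull fails DNS resolution and the pod loops in ImagePullBackOff. A minimal client-go sketch of such a poll-until-Running wait; waitForPodRunning, the kubeconfig path, and the label selector are illustrative, not the harness's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls until every pod matching selector is Running.
// A pod stuck in ImagePullBackOff (like metrics-server above) never
// satisfies the condition, so the call returns a timeout error.
func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet, keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // e.g. ImagePullBackOff, keep polling
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodRunning(context.Background(), cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
		fmt.Println("pod never reached Running:", err)
	}
}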
	
	* 
	* ==> kubernetes-dashboard [06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd] <==
	* 2021/08/13 21:11:20 Starting overwatch
	2021/08/13 21:11:20 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:20 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:20 Using secret token for csrf signing
	2021/08/13 21:11:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:20 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 21:11:20 Generating JWE encryption key
	2021/08/13 21:11:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:21 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:21 Creating in-cluster Sidecar client
	2021/08/13 21:11:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:21 Serving insecurely on HTTP port: 9090
	2021/08/13 21:11:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:12:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00] <==
	* I0813 21:11:20.119046       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:11:20.157246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:11:20.158927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:11:20.187615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:11:20.188690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00868f7b-fc7b-11eb-a3a8-525400553b5e", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c became leader
	I0813 21:11:20.191702       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c!
	I0813 21:11:20.294070       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c!
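
The leaderelection.go lines above are client-go's leader-election helper at work: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock before starting its controller. A minimal sketch of the same pattern; note the log shows an Endpoints-based lock, while this sketch uses the Lease lock type, and the identity string and callback bodies are illustrative:

package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"})
	if err != nil {
		panic(err)
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// only the elected instance starts its provisioner controller
			},
			OnStoppedLeading: func() {
				// lost the lease; stop doing leader-only work
			},
		},
	})
}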
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438: exit status 2 (280.782163ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-xv8fc
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc: exit status 1 (111.360002ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-xv8fc" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438: exit status 2 (334.572749ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210813205952-393438 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210813205952-393438 logs -n 25: (1.422381388s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:17 UTC | Fri, 13 Aug 2021 21:03:18 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:01:15 UTC | Fri, 13 Aug 2021 21:03:20 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:27 UTC | Fri, 13 Aug 2021 21:03:28 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:29 UTC | Fri, 13 Aug 2021 21:03:30 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                  |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:02:54 UTC | Fri, 13 Aug 2021 21:04:26 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:40 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:41 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438             | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:43 UTC | Fri, 13 Aug 2021 21:12:44 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
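
Every row in the Audit table is one recorded CLI invocation. A minimal sketch, assuming the minikube binary is on PATH and using an illustrative profile name, of shelling out the way the harness does for rows like "logs -n 25":

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors an audit row such as: minikube -p <profile> logs -n 25
	out, err := exec.Command("minikube", "-p", "example-profile", "logs", "-n", "25").CombinedOutput()
	if err != nil {
		fmt.Println("minikube exited with error:", err)
	}
	fmt.Printf("%s", out)
}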
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:12:42
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:12:42.056773  436296 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:42.056840  436296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:42.056844  436296 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:42.056847  436296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:42.056949  436296 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:42.057232  436296 out.go:305] Setting JSON to false
	I0813 21:12:42.100088  436296 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6924,"bootTime":1628882238,"procs":183,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:12:42.100233  436296 start.go:121] virtualization: kvm guest
	I0813 21:12:42.102911  436296 out.go:177] * [auto-20210813205925-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:12:42.104650  436296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:42.103060  436296 notify.go:169] Checking for updates...
	I0813 21:12:42.106179  436296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:12:42.107577  436296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:42.109050  436296 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:12:42.109665  436296 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:42.109826  436296 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:42.109991  436296 config.go:177] Loaded profile config "old-k8s-version-20210813205952-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0813 21:12:42.110047  436296 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:12:42.144281  436296 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:12:42.144311  436296 start.go:278] selected driver: kvm2
	I0813 21:12:42.144318  436296 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:12:42.144340  436296 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:12:42.145746  436296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:42.145915  436296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:12:42.159569  436296 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:12:42.159621  436296 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 21:12:42.159763  436296 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:12:42.159786  436296 cni.go:93] Creating CNI manager for ""
	I0813 21:12:42.159792  436296 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:42.159802  436296 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 21:12:42.159835  436296 start_flags.go:277] config:
	{Name:auto-20210813205925-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:42.159928  436296 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:42.161785  436296 out.go:177] * Starting control plane node auto-20210813205925-393438 in cluster auto-20210813205925-393438
	I0813 21:12:42.161816  436296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:12:42.161847  436296 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:12:42.161873  436296 cache.go:56] Caching tarball of preloaded images
	I0813 21:12:42.162010  436296 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:12:42.162031  436296 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:12:42.162170  436296 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/config.json ...
	I0813 21:12:42.162197  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/config.json: {Name:mkf16989dd7f37e3f1839d7699f259f9e903fa2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:42.162363  436296 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:12:42.162394  436296 start.go:313] acquiring machines lock for auto-20210813205925-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:12:42.162453  436296 start.go:317] acquired machines lock for "auto-20210813205925-393438" in 39.857µs
	I0813 21:12:42.162478  436296 start.go:89] Provisioning new machine with config: &{Name:auto-20210813205925-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:12:42.162599  436296 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:12:41.175731  435569 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0813 21:12:41.175809  435569 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:12:41.175876  435569 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:41.214884  435569 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:12:41.214906  435569 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:12:41.214950  435569 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:12:41.252094  435569 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:12:41.252126  435569 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:12:41.252207  435569 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:12:41.289679  435569 cni.go:93] Creating CNI manager for ""
	I0813 21:12:41.289699  435569 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:12:41.289710  435569 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0813 21:12:41.289724  435569 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210813211202-393438 NodeName:newest-cni-20210813211202-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:12:41.289867  435569 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210813211202-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
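
A minimal sketch, assuming sigs.k8s.io/yaml and a hand-rolled partial struct (not the real k8s.io/kubelet/config/v1beta1 types), of reading fields back out of a KubeletConfiguration document like the one minikube renders above:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// kubeletConfig maps only the fields this sketch inspects; sigs.k8s.io/yaml
// converts YAML to JSON first, so json tags drive the decoding.
type kubeletConfig struct {
	CgroupDriver string            `json:"cgroupDriver"`
	FailSwapOn   bool              `json:"failSwapOn"`
	EvictionHard map[string]string `json:"evictionHard"`
}

func main() {
	doc := []byte(`
kind: KubeletConfiguration
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.CgroupDriver, cfg.FailSwapOn, cfg.EvictionHard["nodefs.available"])
}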
	
	I0813 21:12:41.289978  435569 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210813211202-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:12:41.290042  435569 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0813 21:12:41.299692  435569 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:12:41.299771  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:12:41.307525  435569 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (590 bytes)
	I0813 21:12:41.322554  435569 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0813 21:12:41.336106  435569 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0813 21:12:41.349512  435569 ssh_runner.go:149] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I0813 21:12:41.354341  435569 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:12:41.367797  435569 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438 for IP: 192.168.61.119
	I0813 21:12:41.367851  435569 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:12:41.367874  435569 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:12:41.367932  435569 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key
	I0813 21:12:41.367944  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt with IP's: []
	I0813 21:12:41.641141  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt ...
	I0813 21:12:41.641168  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.crt: {Name:mk09e521f9edb034f8aaad0698c8e79df5677721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.641340  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key ...
	I0813 21:12:41.641359  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/client.key: {Name:mkec11bd39bd0f9d2e3b2635f1eaa5d58be99340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.641478  435569 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481
	I0813 21:12:41.641490  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 with IP's: [192.168.61.119 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:12:41.886160  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 ...
	I0813 21:12:41.886187  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481: {Name:mk61fc2aad29c092c6e210eb1d540d0d95661689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.886397  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481 ...
	I0813 21:12:41.886415  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481: {Name:mk14ea2e7606198f4949de88d784e8f460bc7273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:41.886522  435569 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt.f18f0481 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt
	I0813 21:12:41.886581  435569 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key.f18f0481 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key
	I0813 21:12:41.886637  435569 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key
	I0813 21:12:41.886649  435569 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt with IP's: []
	I0813 21:12:42.037320  435569 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt ...
	I0813 21:12:42.037358  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt: {Name:mk84614da15c1e2c3b760f168cef37ae8e8d74a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:42.037570  435569 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key ...
	I0813 21:12:42.037591  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key: {Name:mk0311ad4bfad8801a1e24131f219a083ec2fe61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
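
The crypto.go lines above generate signed certificates with explicit IP SANs. A minimal standard-library sketch of producing a self-signed serving certificate with the same SAN list; the subject name is illustrative, and the IPs are copied from the apiserver cert line in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching "with IP's: [192.168.61.119 10.96.0.1 127.0.0.1 10.0.0.1]"
		IPAddresses: []net.IP{
			net.ParseIP("192.168.61.119"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	// Template doubles as parent, so the certificate is self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}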
	I0813 21:12:42.037840  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:12:42.037895  435569 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:12:42.037911  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:12:42.037951  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:12:42.037986  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:12:42.038018  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:12:42.038081  435569 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:12:42.039326  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:12:42.059990  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:12:42.080299  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:12:42.104519  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:12:42.126817  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:12:42.147577  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:12:42.169539  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:12:42.190097  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:12:42.207185  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:12:42.227830  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:12:42.247569  435569 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:12:42.265540  435569 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:12:42.278263  435569 ssh_runner.go:149] Run: openssl version
	I0813 21:12:42.284450  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:12:42.293028  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.297835  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.297891  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:12:42.304258  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:12:42.312584  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:12:42.321104  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.326032  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.326075  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:12:42.332142  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:12:42.340129  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:12:42.347747  435569 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.352410  435569 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.352443  435569 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:12:42.358219  435569 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
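
	The certificate steps above implement the OpenSSL hashed-symlink convention: OpenSSL locates a trusted CA in /etc/ssl/certs through a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. A minimal sketch of the same step run by hand on the node (paths taken from the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

	For the minikube CA the hash resolves to b5213941, matching the b5213941.0 symlink the runner creates above.
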
	I0813 21:12:42.366060  435569 kubeadm.go:390] StartCluster: {Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:42.366141  435569 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:12:42.366180  435569 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
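
	The CRI listing above is plain crictl filtered by the pod-namespace label. To reproduce it interactively on the node and peek at each container that comes back (crictl is shipped in the minikube ISO; the `head` limit is only for brevity):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | \
	  while read -r id; do sudo crictl inspect "$id" | head -n 5; done
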
	I0813 21:12:39.090342  434502 pod_ready.go:102] pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace has status "Ready":"False"
	I0813 21:12:39.090383  434502 pod_ready.go:81] duration metric: took 4m0.416213127s waiting for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" ...
	E0813 21:12:39.090394  434502 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-8nk4r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0813 21:12:39.090417  434502 pod_ready.go:38] duration metric: took 4m39.85855826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:12:39.090451  434502 kubeadm.go:604] restartCluster took 6m28.851872519s
	W0813 21:12:39.090600  434502 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0813 21:12:39.090644  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0813 21:12:43.507165  434502 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (4.416494277s)
	I0813 21:12:43.507241  434502 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 21:12:43.520696  434502 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 21:12:43.520766  434502 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:12:43.573279  434502 cri.go:76] found id: ""
	I0813 21:12:43.573361  434502 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:12:43.582712  434502 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:12:43.591783  434502 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:12:43.591823  434502 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:12:44.191410  434502 out.go:204]   - Generating certificates and keys ...
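
	Because the config check at 21:12:43 found none of the /etc/kubernetes/*.conf files, stale-config cleanup is skipped and the cluster is rebuilt from scratch. Condensed, the reset-then-init fallback the runner executes is (versioned binary path from the log; --ignore-preflight-errors list abridged here):

	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	sudo systemctl stop -f kubelet
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem
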
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	c003d70f07d34       523cad1a4df73       34 seconds ago       Exited              dashboard-metrics-scraper   3                   eec8b69c5991b
	2155aae82b386       eb516548c180f       59 seconds ago       Running             coredns                     1                   2272e1367f1cd
	06a5d5f9669ef       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   1adc61e97bc2b
	7274a6c3c6c7c       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   8c90fbae3ef84
	0292daffd3bf6       eb516548c180f       About a minute ago   Exited              coredns                     0                   2272e1367f1cd
	404793ed86da6       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   2d648249cd7af
	e6043e9c5e706       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   cad4450810105
	dba11fd3e73c7       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   4f286622b41c9
	087dc8434df74       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   5b9fabbf68980
	489ae9807a474       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   d23a6e5406695
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:04:37 UTC, end at Fri 2021-08-13 21:12:46 UTC. --
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:11.665548427Z" level=info msg="CreateContainer within sandbox \"eec8b69c5991b5d02a18216d1edc4e15b58013599238c4128eea5c279a07ce42\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:11.666968197Z" level=info msg="StartContainer for \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.010850862Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.139676932Z" level=info msg="StartContainer for \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\" returns successfully"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.171866306Z" level=info msg="Finish piping \"stdout\" of container exec \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.172278644Z" level=info msg="Finish piping \"stderr\" of container exec \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.173259529Z" level=info msg="Exec process \"7d0252242d7e8cf9189950fd9dcc18cfcfe5fab23805c44667d65f03b447d759\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.174648763Z" level=info msg="Finish piping stderr of container \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.174959877Z" level=info msg="Finish piping stdout of container \"c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.178600286Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.180449361Z" level=info msg="TaskExit event &TaskExit{ContainerID:c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5,ID:c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5,Pid:8721,ExitStatus:1,ExitedAt:2021-08-13 21:12:12.179615565 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.243224578Z" level=info msg="shim disconnected" id=c003d70f07d34934465f86576a3275e6564d97df9675e63cd9ea9f173ef3ded5
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.243749099Z" level=error msg="copy shim log" error="read /proc/self/fd/126: file already closed"
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.285111331Z" level=info msg="RemoveContainer for \"204be8722375961d673866a98d6de643faa15a7aca9b6eac5006f5a2bd1ce54e\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:12.296826300Z" level=info msg="RemoveContainer for \"204be8722375961d673866a98d6de643faa15a7aca9b6eac5006f5a2bd1ce54e\" returns successfully"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.011234858Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.117422220Z" level=info msg="Finish piping \"stderr\" of container exec \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\""
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.117718335Z" level=info msg="Finish piping \"stdout\" of container exec \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\""
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.119307382Z" level=info msg="Exec process \"ea947e3c5594485f0fa20782384d4c5c698d5c3843a508e07aa3059f687670e4\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:22 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:22.122976964Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.011718869Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.132639721Z" level=info msg="Finish piping \"stderr\" of container exec \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\""
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.133644455Z" level=info msg="Exec process \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\" exits with exit code 0 and error <nil>"
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.134077178Z" level=info msg="Finish piping \"stdout\" of container exec \"76b86c840b92d501dba693e13b79fd17157312bc8f1a57e6b6ccdfabba1759f4\""
	Aug 13 21:12:32 old-k8s-version-20210813205952-393438 containerd[2157]: time="2021-08-13T21:12:32.143064558Z" level=info msg="ExecSync for \"e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134\" returns with exit code 0"
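
	The recurring ExecSync entries are etcd's liveness probe firing roughly every ten seconds (21:12:12, :22, :32). The exact command containerd execs inside the etcd container can be run by hand for a one-off health check (all paths copied from the log lines above):

	ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	  get foo
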
	
	* 
	* ==> coredns [0292daffd3bf6a4ddd0f4384fce377799efbf1e96de28aeb16cba46af2c1be35] <==
	* .:53
	2021-08-13T21:11:21.852Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:11:21.852Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:11:21.852Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	E0813 21:11:46.853800       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0813 21:11:46.853800       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-vlm5d.unknownuser.log.ERROR.20210813-211146.1: no such file or directory
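
	The first coredns instance exited because it could not reach the API server at the service IP (10.96.0.1:443 i/o timeout); the trailing klog error is only the logger failing to open its fallback file in /tmp after that. One quick sanity check is to confirm the kubernetes Service still maps to the apiserver endpoint (a timeout despite a correct endpoint would point at the pod network rather than the Service object):

	kubectl --context old-k8s-version-20210813205952-393438 get endpoints kubernetes -n default
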
	
	* 
	* ==> coredns [2155aae82b38690ffce23668254fc6d4bd4260510c131d7a24eff3d3520e31ed] <==
	* .:53
	2021-08-13T21:11:47.547Z [INFO] CoreDNS-1.3.1
	2021-08-13T21:11:47.547Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-13T21:11:47.547Z [INFO] plugin/reload: Running configuration MD5 = 6c0e799ff6797682aae95e2097dfc0d9
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813205952-393438
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210813205952-393438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c
	                    minikube.k8s.io/name=old-k8s-version-20210813205952-393438
	                    minikube.k8s.io/updated_at=2021_08_13T21_11_00_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 21:10:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:10:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 21:11:55 +0000   Fri, 13 Aug 2021 21:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.180
	  Hostname:    old-k8s-version-20210813205952-393438
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2186320Ki
	 pods:               110
	System Info:
	 Machine ID:                 b49590f2d3e8410c89f829137ae0deb7
	 System UUID:                b49590f2-d3e8-410c-89f8-29137ae0deb7
	 Boot ID:                    185a9652-d3b5-460d-a40a-2091690a90c6
	 Kernel Version:             4.19.182
	 OS Image:                   Buildroot 2020.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-vlm5d                                           100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     91s
	  kube-system                etcd-old-k8s-version-20210813205952-393438                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                kube-apiserver-old-k8s-version-20210813205952-393438              250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                kube-controller-manager-old-k8s-version-20210813205952-393438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                kube-proxy-zqww7                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                kube-scheduler-old-k8s-version-20210813205952-393438              100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                metrics-server-8546d8b77b-xv8fc                                   100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         87s
	  kube-system                storage-provisioner                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-dws7v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-7pkrv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             370Mi (17%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                               Message
	  ----    ------                   ----                 ----                                               -------
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet, old-k8s-version-20210813205952-393438     Node old-k8s-version-20210813205952-393438 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                  kube-proxy, old-k8s-version-20210813205952-393438  Starting kube-proxy.
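
	This section is the output of kubectl describe node; to regenerate it against the same profile:

	kubectl --context old-k8s-version-20210813205952-393438 describe node \
	  old-k8s-version-20210813205952-393438
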
	
	* 
	* ==> dmesg <==
	* [  +3.320831] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.031760] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.925391] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1735 comm=systemd-network
	[  +0.748465] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.353230] vboxguest: loading out-of-tree module taints kernel.
	[  +0.009635] vboxguest: PCI device not found, probably running on physical hardware.
	[ +19.819445] systemd-fstab-generator[2069]: Ignoring "noauto" for root device
	[  +0.260991] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.154391] systemd-fstab-generator[2116]: Ignoring "noauto" for root device
	[  +0.240964] systemd-fstab-generator[2147]: Ignoring "noauto" for root device
	[Aug13 21:05] systemd-fstab-generator[2337]: Ignoring "noauto" for root device
	[Aug13 21:06] kauditd_printk_skb: 20 callbacks suppressed
	[ +29.372695] kauditd_printk_skb: 125 callbacks suppressed
	[  +3.378760] NFSD: Unable to end grace period: -110
	[  +3.197846] kauditd_printk_skb: 17 callbacks suppressed
	[Aug13 21:07] kauditd_printk_skb: 2 callbacks suppressed
	[Aug13 21:10] systemd-fstab-generator[6255]: Ignoring "noauto" for root device
	[Aug13 21:11] tee (6716): /proc/6520/oom_adj is deprecated, please use /proc/6520/oom_score_adj instead.
	[ +14.712988] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.345279] kauditd_printk_skb: 185 callbacks suppressed
	[ +36.451508] kauditd_printk_skb: 44 callbacks suppressed
	[Aug13 21:12] systemd-fstab-generator[8918]: Ignoring "noauto" for root device
	[  +0.847915] systemd-fstab-generator[8974]: Ignoring "noauto" for root device
	[  +0.998053] systemd-fstab-generator[9027]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e6043e9c5e7060579e8cfb61341b2824be4d27952969e426cd3712139ef2d134] <==
	* 2021-08-13 21:10:51.739909 I | embed: serving client requests on 127.0.0.1:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 21:12:34.722527 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:4023" took too long (1.584817625s) to execute
	2021-08-13 21:12:34.722782 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.753344359s) to execute
	2021-08-13 21:12:34.723007 W | etcdserver: request "header:<ID:13894303321011756213 > lease_revoke:<id:40d27b415a6f9c70>" with result "size:29" took too long (449.827313ms) to execute
	2021-08-13 21:12:35.859266 W | wal: sync duration of 1.586197491s, expected less than 1s
	2021-08-13 21:12:38.923038 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (699.960666ms) to execute
	2021-08-13 21:12:38.923870 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (3.694306777s) to execute
	2021-08-13 21:12:38.924188 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:2567" took too long (4.133402067s) to execute
	2021-08-13 21:12:38.924441 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (2.914366219s) to execute
	2021-08-13 21:12:38.924704 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (2.797046643s) to execute
	2021-08-13 21:12:38.924924 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-xv8fc.169af9fd2786670d\" " with result "range_response_count:1 size:513" took too long (2.308950967s) to execute
	2021-08-13 21:12:38.924995 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3513" took too long (3.68994946s) to execute
	2021-08-13 21:12:38.925186 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (4.195703284s) to execute
	2021-08-13 21:12:38.926301 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (4.205693806s) to execute
	2021-08-13 21:12:38.928044 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210813205952-393438\" " with result "range_response_count:1 size:404" took too long (3.073607517s) to execute
	2021-08-13 21:12:38.929119 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.62341689s) to execute
	2021-08-13 21:12:39.059193 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:2567" took too long (124.844917ms) to execute
	2021-08-13 21:12:39.066354 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-xv8fc.169af9fd27862677\" " with result "range_response_count:1 size:552" took too long (124.642984ms) to execute
	2021-08-13 21:12:39.066934 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (100.501941ms) to execute
	2021-08-13 21:12:40.045113 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (655.255821ms) to execute
	2021-08-13 21:12:40.046124 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:293" took too long (969.356868ms) to execute
	2021-08-13 21:12:40.048215 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/\" range_end:\"/registry/events/kubernetes-dashboard0\" " with result "range_response_count:17 size:10109" took too long (965.808767ms) to execute
	2021-08-13 21:12:40.050694 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (555.230569ms) to execute
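
	The WAL sync of 1.586s (expected <1s) is the likely root of the "took too long" warnings: range requests queue behind the slow fsync on the shared CI disk. The endpoint status subcommand, using the same client certs as the health probe above, is one way to confirm the store is otherwise healthy:

	ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	  endpoint status --write-out=table
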
	
	* 
	* ==> kernel <==
	*  21:12:46 up 8 min,  0 users,  load average: 1.12, 0.85, 0.40
	Linux old-k8s-version-20210813205952-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [489ae9807a474727d2123eda2b84c80e0a576501ea4b893c38a67c8b52eef98c] <==
	* Trace[1362351978]: [4.212998089s] [4.212971782s] About to write a response
	I0813 21:12:38.938596       1 trace.go:81] Trace[1878302692]: "GuaranteedUpdate etcd3: *core.Event" (started: 2021-08-13 21:12:36.614735396 +0000 UTC m=+106.740867101) (total time: 2.323843716s):
	Trace[1878302692]: [2.312248067s] [2.312248067s] initial value restored
	I0813 21:12:38.939194       1 trace.go:81] Trace[209371397]: "Patch /api/v1/namespaces/kube-system/events/metrics-server-8546d8b77b-xv8fc.169af9fd2786670d" (started: 2021-08-13 21:12:36.614424249 +0000 UTC m=+106.740555951) (total time: 2.324756028s):
	Trace[209371397]: [2.312562711s] [2.312337403s] About to apply patch
	I0813 21:12:39.211714       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:39.211929       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:40.051770       1 trace.go:81] Trace[121213177]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2021-08-13 21:12:39.072186474 +0000 UTC m=+109.198318208) (total time: 979.515663ms):
	Trace[121213177]: [978.518308ms] [978.502443ms] About to write a response
	I0813 21:12:40.055874       1 trace.go:81] Trace[1839725854]: "List /api/v1/namespaces/kubernetes-dashboard/events" (started: 2021-08-13 21:12:39.078921636 +0000 UTC m=+109.205053343) (total time: 976.936003ms):
	Trace[1839725854]: [976.739184ms] [976.662292ms] Listing from storage done
	I0813 21:12:40.212694       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:40.213742       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:41.214378       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:41.214877       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:42.215133       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:42.216170       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:43.217008       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:43.217275       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:44.219309       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:44.219622       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:45.220125       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:45.220336       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 21:12:46.220987       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 21:12:46.221326       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [dba11fd3e73c724d1c00b0e83c52bad6f1cf2458aababb4d095894d058cab689] <==
	* I0813 21:11:19.012299       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.012748       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.013144       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.045861       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.046265       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.100810       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.100884       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.100930       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.100971       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.127229       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.127289       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.140058       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.140116       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.221212       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:11:19.221309       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.221445       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.221538       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:11:19.420026       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"004dbe06-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-xv8fc
	I0813 21:11:19.553426       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"00a9f8a1-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-dws7v
	I0813 21:11:19.570880       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"00b894e0-fc7b-11eb-a3a8-525400553b5e", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-7pkrv
	E0813 21:11:45.043383       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:11:47.598231       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:12:15.296877       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 21:12:19.601383       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 21:12:45.550065       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [404793ed86da645f2d7dd676537262cdc1ee03ceb8ee7b455cddd6e89db729d0] <==
	* W0813 21:11:16.532298       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 21:11:16.601905       1 server_others.go:148] Using iptables Proxier.
	I0813 21:11:16.602135       1 server_others.go:178] Tearing down inactive rules.
	E0813 21:11:16.718160       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0813 21:11:16.873112       1 server.go:555] Version: v1.14.0
	I0813 21:11:16.883302       1 config.go:202] Starting service config controller
	I0813 21:11:16.883337       1 config.go:102] Starting endpoints config controller
	I0813 21:11:16.883593       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 21:11:16.883594       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 21:11:16.986783       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 21:11:16.987750       1 controller_utils.go:1034] Caches are synced for endpoints config controller
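
	kube-proxy came up in iptables mode (an empty proxy-mode flag falls back to it) after tearing down leftover ipvs rules; the "Too many links" error appears benign here since the proxier continues and both caches sync. The resulting service NAT rules can be inspected on the node:

	sudo iptables-save -t nat | grep KUBE-SVC | head
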
	
	* 
	* ==> kube-scheduler [087dc8434df74488bdb5d54ac93f7608f3e4f6d66e01cae27b7e707728ce684e] <==
	* W0813 21:10:51.144977       1 authentication.go:55] Authentication is disabled
	I0813 21:10:51.145154       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 21:10:51.147327       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 21:10:55.622200       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:55.622789       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:55.622837       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:55.622878       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:55.622920       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:55.622961       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:55.623010       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:55.625243       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:55.625292       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:55.632202       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:10:56.630420       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:10:56.632544       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:10:56.632606       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:10:56.632646       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:10:56.632979       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:10:56.635371       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:10:56.638899       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:10:56.641800       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:10:56.643286       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:10:56.645027       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 21:10:58.450867       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 21:10:58.551380       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:04:37 UTC, end at Fri 2021-08-13 21:12:46 UTC. --
	Aug 13 21:11:20 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:20.933069    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:29 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:29.092415    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:30 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:30.098449    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623186    6274 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623235    6274 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623305    6274 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:32 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:32.623347    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:11:36 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:36.346701    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:47 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:47.610534    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:11:48 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:48.194017    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:56 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:56.342969    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.626294    6274 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628308    6274 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628612    6274 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:11:58 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:11:58.628799    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:12:11 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:11.611440    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:12 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:12.280912    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:16 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:16.343871    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:24 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:24.609990    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:29 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:29.609111    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:36 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:36.611262    6274 pod_workers.go:190] Error syncing pod 0111f547-fc7b-11eb-a3a8-525400553b5e ("metrics-server-8546d8b77b-xv8fc_kube-system(0111f547-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 kubelet[6274]: E0813 21:12:40.612285    6274 pod_workers.go:190] Error syncing pod 0127056f-fc7b-11eb-a3a8-525400553b5e ("dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-dws7v_kubernetes-dashboard(0127056f-fc7b-11eb-a3a8-525400553b5e)"
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:12:40 old-k8s-version-20210813205952-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
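Nothing in the kubelet section above is mysterious: this test deliberately points the metrics-server image at fake.domain, which has no DNS record, so every pull dies at the lookup against 192.168.122.1:53 and the pod settles into ImagePullBackOff, while dashboard-metrics-scraper restarts under CrashLoopBackOff with the usual doubling delay (10s, 20s, 40s). A minimal Go sketch, not part of the test suite, that reproduces the underlying resolver failure:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain has no DNS record, so this fails the same way the
		// image pull above does: "no such host".
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println(err)
		}
	}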
	
	* 
	* ==> kubernetes-dashboard [06a5d5f9669ef6a996fc19248f647d87082e6700a5bfbe4493d45a11d0635acd] <==
	* 2021/08/13 21:11:20 Using namespace: kubernetes-dashboard
	2021/08/13 21:11:20 Using in-cluster config to connect to apiserver
	2021/08/13 21:11:20 Using secret token for csrf signing
	2021/08/13 21:11:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:11:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:11:20 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 21:11:20 Generating JWE encryption key
	2021/08/13 21:11:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:11:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:11:21 Initializing JWE encryption key from synchronized object
	2021/08/13 21:11:21 Creating in-cluster Sidecar client
	2021/08/13 21:11:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:21 Serving insecurely on HTTP port: 9090
	2021/08/13 21:11:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:12:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:11:20 Starting overwatch
	
	* 
	* ==> storage-provisioner [7274a6c3c6c7ced5146590bd6807f900c384e66aad5ad6ccb5f07893cc8bcc00] <==
	* I0813 21:11:20.119046       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 21:11:20.157246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 21:11:20.158927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 21:11:20.187615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 21:11:20.188690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00868f7b-fc7b-11eb-a3a8-525400553b5e", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c became leader
	I0813 21:11:20.191702       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c!
	I0813 21:11:20.294070       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813205952-393438_96b1e176-50b2-43b0-adaf-0d89063ffb0c!
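The storage-provisioner log is the standard client-go leader-election handshake: attempt the kube-system/k8s.io-minikube-hostpath lease, acquire it, emit a LeaderElection event, then start the provisioner controller. Below is a hedged sketch of that handshake with client-go's leaderelection package; the lock type, identity, and timings are illustrative (the provisioner above held an Endpoints-backed lock), not its actual configuration:

	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Illustrative Lease lock; the log above shows an Endpoints lock
		// named k8s.io-minikube-hostpath in kube-system.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}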
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438: exit status 2 (275.571823ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-xv8fc
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc: exit status 1 (110.348973ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-xv8fc" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813205952-393438 describe pod metrics-server-8546d8b77b-xv8fc: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (26.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813210115-393438 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210813210115-393438 --alsologtostderr -v=1: exit status 80 (2.77172138s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210813210115-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:13:44.245117  437298 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:13:44.245333  437298 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:13:44.245346  437298 out.go:311] Setting ErrFile to fd 2...
	I0813 21:13:44.245351  437298 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:13:44.245503  437298 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:13:44.245724  437298 out.go:305] Setting JSON to false
	I0813 21:13:44.245748  437298 mustload.go:65] Loading cluster: embed-certs-20210813210115-393438
	I0813 21:13:44.246174  437298 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:13:44.246749  437298 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:44.246802  437298 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:44.260215  437298 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0813 21:13:44.260925  437298 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:44.261612  437298 main.go:130] libmachine: Using API Version  1
	I0813 21:13:44.261643  437298 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:44.262087  437298 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:44.262257  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:44.265943  437298 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	I0813 21:13:44.266404  437298 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:44.266450  437298 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:44.279956  437298 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0813 21:13:44.280466  437298 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:44.281070  437298 main.go:130] libmachine: Using API Version  1
	I0813 21:13:44.281088  437298 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:44.281648  437298 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:44.281795  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:44.282336  437298 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210813210115-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
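As an aside, the %!s(bool=false) and %!s(int=0) noise in the flag dump above is just Go's fmt package flagging non-string values that were formatted with a %s verb, e.g.:

	package main

	import "fmt"

	func main() {
		fmt.Printf("%s\n", false) // prints %!s(bool=false), as in the log
	}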
	I0813 21:13:44.284469  437298 out.go:177] * Pausing node embed-certs-20210813210115-393438 ... 
	I0813 21:13:44.284495  437298 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	I0813 21:13:44.284953  437298 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:44.285006  437298 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:44.297491  437298 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38261
	I0813 21:13:44.298014  437298 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:44.298561  437298 main.go:130] libmachine: Using API Version  1
	I0813 21:13:44.298587  437298 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:44.298963  437298 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:44.299163  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:44.299421  437298 ssh_runner.go:149] Run: systemctl --version
	I0813 21:13:44.299449  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:13:44.306078  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:44.306538  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:13:44.306566  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:44.306881  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:13:44.307046  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:13:44.307221  437298 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:13:44.307372  437298 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:13:44.412034  437298 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:44.427424  437298 pause.go:50] kubelet running: true
	I0813 21:13:44.427505  437298 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:13:44.667742  437298 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:13:44.667842  437298 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:13:44.823154  437298 cri.go:76] found id: "f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c"
	I0813 21:13:44.823184  437298 cri.go:76] found id: "fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a"
	I0813 21:13:44.823191  437298 cri.go:76] found id: "f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b"
	I0813 21:13:44.823197  437298 cri.go:76] found id: "875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18"
	I0813 21:13:44.823202  437298 cri.go:76] found id: "699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef"
	I0813 21:13:44.823208  437298 cri.go:76] found id: "6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c"
	I0813 21:13:44.823214  437298 cri.go:76] found id: "7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69"
	I0813 21:13:44.823219  437298 cri.go:76] found id: "fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051"
	I0813 21:13:44.823224  437298 cri.go:76] found id: "34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a"
	I0813 21:13:44.823234  437298 cri.go:76] found id: "feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6"
	I0813 21:13:44.823239  437298 cri.go:76] found id: ""
	I0813 21:13:44.823288  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:13:44.895767  437298 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d/rootfs","created":"2021-08-13T21:12:51.697767816Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813210115-393438_1b15094484d1752ba00a51b8d8e33406"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b156
5eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301/rootfs","created":"2021-08-13T21:13:26.680998168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_fee29ac6-9f54-43d1-a6af-19999cf2f219"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","pid":6801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f/rootfs","created":"2021-08-13T21:13:27.091642027Z","annotations":{"io.kubernetes.cri.container-type":"s
andbox","io.kubernetes.cri.sandbox-id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-67f8b_dec6e08d-aa7f-4511-ae76-036fb08eb5f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4/rootfs","created":"2021-08-13T21:12:51.647779415Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813210115-393438_10bf1c419beee447668448fc163bb6
1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","pid":5657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362/rootfs","created":"2021-08-13T21:12:51.779636495Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813210115-393438_9d08051b05b007306ea9cd2617e87a82"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","pid":6152,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324
674c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7/rootfs","created":"2021-08-13T21:13:21.994057468Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lg5kj_bcceb77f-7a57-4461-a36d-bc56cd609030"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62/rootfs","created":"2021-08-13T21:13:26.193680605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f23ad2409b
33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2bkk5_fc0c5961-f1c7-4e5b-8c73-ec11bcd71140"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","pid":5814,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef/rootfs","created":"2021-08-13T21:12:52.968142517Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","pid":5746,"status":"running","bundle":"/run/conta
inerd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c/rootfs","created":"2021-08-13T21:12:52.636861166Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","pid":6795,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966/rootfs","created":"2021-08-13T21:13:27.074789215Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sa
ndbox-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-vfmn7_92209727-a9d1-4943-a8c8-f0d00da0b005"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","pid":5707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69/rootfs","created":"2021-08-13T21:12:52.3666968Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","pid":5812,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18/rootfs","created":"2021-08-13T21:12:52.961178551Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","pid":6305,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9/rootfs","created":"2021-08-13T21:13:22.702224477Z","annotations":{"io.kubernete
s.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-g7rvp_31de3d17-fc0d-487b-9b5e-29a850447c11"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","pid":5644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6/rootfs","created":"2021-08-13T21:12:51.74680924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813210115-393438_f9121dc724f91dc09d5405c1b
2513d88"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","pid":6845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c/rootfs","created":"2021-08-13T21:13:27.509656862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","pid":6278,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62a
b4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b/rootfs","created":"2021-08-13T21:13:22.453774861Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","pid":6374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a/rootfs","created":"2021-08-13T21:13:23.466246261Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f
eba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","pid":6889,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6/rootfs","created":"2021-08-13T21:13:27.95191134Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966"},"owner":"root"}]
	I0813 21:13:44.896043  437298 cri.go:113] list returned 18 containers
	I0813 21:13:44.896062  437298 cri.go:116] container: {ID:11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d Status:running}
	I0813 21:13:44.896091  437298 cri.go:118] skipping 11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d - not in ps
	I0813 21:13:44.896102  437298 cri.go:116] container: {ID:1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 Status:running}
	I0813 21:13:44.896115  437298 cri.go:118] skipping 1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 - not in ps
	I0813 21:13:44.896125  437298 cri.go:116] container: {ID:2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f Status:running}
	I0813 21:13:44.896133  437298 cri.go:118] skipping 2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f - not in ps
	I0813 21:13:44.896143  437298 cri.go:116] container: {ID:2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 Status:running}
	I0813 21:13:44.896150  437298 cri.go:118] skipping 2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 - not in ps
	I0813 21:13:44.896163  437298 cri.go:116] container: {ID:3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 Status:running}
	I0813 21:13:44.896172  437298 cri.go:118] skipping 3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 - not in ps
	I0813 21:13:44.896179  437298 cri.go:116] container: {ID:386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 Status:running}
	I0813 21:13:44.896192  437298 cri.go:118] skipping 386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 - not in ps
	I0813 21:13:44.896199  437298 cri.go:116] container: {ID:3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 Status:running}
	I0813 21:13:44.896207  437298 cri.go:118] skipping 3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 - not in ps
	I0813 21:13:44.896213  437298 cri.go:116] container: {ID:699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef Status:running}
	I0813 21:13:44.896221  437298 cri.go:116] container: {ID:6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c Status:running}
	I0813 21:13:44.896240  437298 cri.go:116] container: {ID:6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 Status:running}
	I0813 21:13:44.896252  437298 cri.go:118] skipping 6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 - not in ps
	I0813 21:13:44.896262  437298 cri.go:116] container: {ID:7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69 Status:running}
	I0813 21:13:44.896277  437298 cri.go:116] container: {ID:875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18 Status:running}
	I0813 21:13:44.896285  437298 cri.go:116] container: {ID:bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 Status:running}
	I0813 21:13:44.896292  437298 cri.go:118] skipping bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 - not in ps
	I0813 21:13:44.896299  437298 cri.go:116] container: {ID:c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 Status:running}
	I0813 21:13:44.896307  437298 cri.go:118] skipping c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 - not in ps
	I0813 21:13:44.896316  437298 cri.go:116] container: {ID:f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c Status:running}
	I0813 21:13:44.896322  437298 cri.go:116] container: {ID:f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b Status:running}
	I0813 21:13:44.896331  437298 cri.go:116] container: {ID:fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a Status:running}
	I0813 21:13:44.896337  437298 cri.go:116] container: {ID:feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6 Status:running}
	I0813 21:13:44.896388  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef
	I0813 21:13:44.926746  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef 6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c
	I0813 21:13:44.952680  437298 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef 6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:13:44Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:13:45.229134  437298 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:45.246121  437298 pause.go:50] kubelet running: false
	I0813 21:13:45.246184  437298 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:13:45.509277  437298 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:13:45.509383  437298 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:13:45.742106  437298 cri.go:76] found id: "f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c"
	I0813 21:13:45.742148  437298 cri.go:76] found id: "fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a"
	I0813 21:13:45.742155  437298 cri.go:76] found id: "f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b"
	I0813 21:13:45.742162  437298 cri.go:76] found id: "875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18"
	I0813 21:13:45.742167  437298 cri.go:76] found id: "699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef"
	I0813 21:13:45.742173  437298 cri.go:76] found id: "6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c"
	I0813 21:13:45.742178  437298 cri.go:76] found id: "7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69"
	I0813 21:13:45.742183  437298 cri.go:76] found id: "fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051"
	I0813 21:13:45.742189  437298 cri.go:76] found id: "34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a"
	I0813 21:13:45.742199  437298 cri.go:76] found id: "feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6"
	I0813 21:13:45.742206  437298 cri.go:76] found id: ""
	I0813 21:13:45.742287  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:13:45.801232  437298 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d/rootfs","created":"2021-08-13T21:12:51.697767816Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813210115-393438_1b15094484d1752ba00a51b8d8e33406"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b156
5eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301/rootfs","created":"2021-08-13T21:13:26.680998168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_fee29ac6-9f54-43d1-a6af-19999cf2f219"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","pid":6801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f/rootfs","created":"2021-08-13T21:13:27.091642027Z","annotations":{"io.kubernetes.cri.container-type":"s
andbox","io.kubernetes.cri.sandbox-id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-67f8b_dec6e08d-aa7f-4511-ae76-036fb08eb5f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4/rootfs","created":"2021-08-13T21:12:51.647779415Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813210115-393438_10bf1c419beee447668448fc163bb6
1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","pid":5657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362/rootfs","created":"2021-08-13T21:12:51.779636495Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813210115-393438_9d08051b05b007306ea9cd2617e87a82"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","pid":6152,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324
674c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7/rootfs","created":"2021-08-13T21:13:21.994057468Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lg5kj_bcceb77f-7a57-4461-a36d-bc56cd609030"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62/rootfs","created":"2021-08-13T21:13:26.193680605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f23ad2409b
33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2bkk5_fc0c5961-f1c7-4e5b-8c73-ec11bcd71140"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","pid":5814,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef/rootfs","created":"2021-08-13T21:12:52.968142517Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","pid":5746,"status":"running","bundle":"/run/contai
nerd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c/rootfs","created":"2021-08-13T21:12:52.636861166Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","pid":6795,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966/rootfs","created":"2021-08-13T21:13:27.074789215Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.san
dbox-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-vfmn7_92209727-a9d1-4943-a8c8-f0d00da0b005"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","pid":5707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69/rootfs","created":"2021-08-13T21:12:52.3666968Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","pid":5812,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18/rootfs","created":"2021-08-13T21:12:52.961178551Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","pid":6305,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9/rootfs","created":"2021-08-13T21:13:22.702224477Z","annotations":{"io.kubernetes
.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-g7rvp_31de3d17-fc0d-487b-9b5e-29a850447c11"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","pid":5644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6/rootfs","created":"2021-08-13T21:12:51.74680924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813210115-393438_f9121dc724f91dc09d5405c1b2
513d88"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","pid":6845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c/rootfs","created":"2021-08-13T21:13:27.509656862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","pid":6278,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62ab
4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b/rootfs","created":"2021-08-13T21:13:22.453774861Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","pid":6374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a/rootfs","created":"2021-08-13T21:13:23.466246261Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe
ba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","pid":6889,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6/rootfs","created":"2021-08-13T21:13:27.95191134Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966"},"owner":"root"}]
	I0813 21:13:45.801505  437298 cri.go:113] list returned 18 containers
	I0813 21:13:45.801525  437298 cri.go:116] container: {ID:11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d Status:running}
	I0813 21:13:45.801541  437298 cri.go:118] skipping 11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d - not in ps
	I0813 21:13:45.801548  437298 cri.go:116] container: {ID:1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 Status:running}
	I0813 21:13:45.801556  437298 cri.go:118] skipping 1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 - not in ps
	I0813 21:13:45.801567  437298 cri.go:116] container: {ID:2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f Status:running}
	I0813 21:13:45.801574  437298 cri.go:118] skipping 2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f - not in ps
	I0813 21:13:45.801580  437298 cri.go:116] container: {ID:2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 Status:running}
	I0813 21:13:45.801588  437298 cri.go:118] skipping 2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 - not in ps
	I0813 21:13:45.801594  437298 cri.go:116] container: {ID:3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 Status:running}
	I0813 21:13:45.801602  437298 cri.go:118] skipping 3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 - not in ps
	I0813 21:13:45.801609  437298 cri.go:116] container: {ID:386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 Status:running}
	I0813 21:13:45.801619  437298 cri.go:118] skipping 386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 - not in ps
	I0813 21:13:45.801626  437298 cri.go:116] container: {ID:3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 Status:running}
	I0813 21:13:45.801637  437298 cri.go:118] skipping 3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 - not in ps
	I0813 21:13:45.801644  437298 cri.go:116] container: {ID:699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef Status:paused}
	I0813 21:13:45.801654  437298 cri.go:122] skipping {699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef paused}: state = "paused", want "running"
	I0813 21:13:45.801671  437298 cri.go:116] container: {ID:6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c Status:running}
	I0813 21:13:45.801679  437298 cri.go:116] container: {ID:6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 Status:running}
	I0813 21:13:45.801688  437298 cri.go:118] skipping 6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 - not in ps
	I0813 21:13:45.801706  437298 cri.go:116] container: {ID:7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69 Status:running}
	I0813 21:13:45.801715  437298 cri.go:116] container: {ID:875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18 Status:running}
	I0813 21:13:45.801725  437298 cri.go:116] container: {ID:bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 Status:running}
	I0813 21:13:45.801734  437298 cri.go:118] skipping bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 - not in ps
	I0813 21:13:45.801741  437298 cri.go:116] container: {ID:c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 Status:running}
	I0813 21:13:45.801749  437298 cri.go:118] skipping c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 - not in ps
	I0813 21:13:45.801756  437298 cri.go:116] container: {ID:f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c Status:running}
	I0813 21:13:45.801764  437298 cri.go:116] container: {ID:f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b Status:running}
	I0813 21:13:45.801773  437298 cri.go:116] container: {ID:fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a Status:running}
	I0813 21:13:45.801783  437298 cri.go:116] container: {ID:feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6 Status:running}
	I0813 21:13:45.801831  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c
	I0813 21:13:45.828627  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c 7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69
	I0813 21:13:45.852589  437298 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c 7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:13:45Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:13:46.393125  437298 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:46.408910  437298 pause.go:50] kubelet running: false
	I0813 21:13:46.408973  437298 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:13:46.641464  437298 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:13:46.641551  437298 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:13:46.817892  437298 cri.go:76] found id: "f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c"
	I0813 21:13:46.817935  437298 cri.go:76] found id: "fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a"
	I0813 21:13:46.817942  437298 cri.go:76] found id: "f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b"
	I0813 21:13:46.817950  437298 cri.go:76] found id: "875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18"
	I0813 21:13:46.817956  437298 cri.go:76] found id: "699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef"
	I0813 21:13:46.817962  437298 cri.go:76] found id: "6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c"
	I0813 21:13:46.817968  437298 cri.go:76] found id: "7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69"
	I0813 21:13:46.817973  437298 cri.go:76] found id: "fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051"
	I0813 21:13:46.817978  437298 cri.go:76] found id: "34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a"
	I0813 21:13:46.817988  437298 cri.go:76] found id: "feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6"
	I0813 21:13:46.817997  437298 cri.go:76] found id: ""
	I0813 21:13:46.818052  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:13:46.876974  437298 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d/rootfs","created":"2021-08-13T21:12:51.697767816Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210813210115-393438_1b15094484d1752ba00a51b8d8e33406"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b156
5eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301/rootfs","created":"2021-08-13T21:13:26.680998168Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_fee29ac6-9f54-43d1-a6af-19999cf2f219"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","pid":6801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f/rootfs","created":"2021-08-13T21:13:27.091642027Z","annotations":{"io.kubernetes.cri.container-type":"s
andbox","io.kubernetes.cri.sandbox-id":"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-67f8b_dec6e08d-aa7f-4511-ae76-036fb08eb5f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4/rootfs","created":"2021-08-13T21:12:51.647779415Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210813210115-393438_10bf1c419beee447668448fc163bb6
1c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","pid":5657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362/rootfs","created":"2021-08-13T21:12:51.779636495Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210813210115-393438_9d08051b05b007306ea9cd2617e87a82"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","pid":6152,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324
674c7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7/rootfs","created":"2021-08-13T21:13:21.994057468Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lg5kj_bcceb77f-7a57-4461-a36d-bc56cd609030"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62/rootfs","created":"2021-08-13T21:13:26.193680605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f23ad2409b
33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2bkk5_fc0c5961-f1c7-4e5b-8c73-ec11bcd71140"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","pid":5814,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef/rootfs","created":"2021-08-13T21:12:52.968142517Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","pid":5746,"status":"paused","bundle":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c/rootfs","created":"2021-08-13T21:12:52.636861166Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","pid":6795,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966/rootfs","created":"2021-08-13T21:13:27.074789215Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sand
box-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-vfmn7_92209727-a9d1-4943-a8c8-f0d00da0b005"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","pid":5707,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69/rootfs","created":"2021-08-13T21:12:52.3666968Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","pid":5812,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18/rootfs","created":"2021-08-13T21:12:52.961178551Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","pid":6305,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9/rootfs","created":"2021-08-13T21:13:22.702224477Z","annotations":{"io.kubernetes.
cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-g7rvp_31de3d17-fc0d-487b-9b5e-29a850447c11"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","pid":5644,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6/rootfs","created":"2021-08-13T21:12:51.74680924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210813210115-393438_f9121dc724f91dc09d5405c1b25
13d88"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","pid":6845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c/rootfs","created":"2021-08-13T21:13:27.509656862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","pid":6278,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f982e62ab4
f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b/rootfs","created":"2021-08-13T21:13:22.453774861Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","pid":6374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a/rootfs","created":"2021-08-13T21:13:23.466246261Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb
a03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","pid":6889,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6/rootfs","created":"2021-08-13T21:13:27.95191134Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966"},"owner":"root"}]
	I0813 21:13:46.877209  437298 cri.go:113] list returned 18 containers
	I0813 21:13:46.877226  437298 cri.go:116] container: {ID:11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d Status:running}
	I0813 21:13:46.877242  437298 cri.go:118] skipping 11f1b554ea9b0722ff16d92b69ae36ddcbcc4269b4cb427b84bc53814059770d - not in ps
	I0813 21:13:46.877248  437298 cri.go:116] container: {ID:1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 Status:running}
	I0813 21:13:46.877257  437298 cri.go:118] skipping 1a65131b1565eafea431ae76be272e95fc89120ac32d6caf78b000e809292301 - not in ps
	I0813 21:13:46.877265  437298 cri.go:116] container: {ID:2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f Status:running}
	I0813 21:13:46.877273  437298 cri.go:118] skipping 2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f - not in ps
	I0813 21:13:46.877281  437298 cri.go:116] container: {ID:2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 Status:running}
	I0813 21:13:46.877291  437298 cri.go:118] skipping 2bed3213c62a11542314564e9b30a604db5401fd06d48547aff0233f0b4f22a4 - not in ps
	I0813 21:13:46.877296  437298 cri.go:116] container: {ID:3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 Status:running}
	I0813 21:13:46.877302  437298 cri.go:118] skipping 3096fb97d92c5399f5196c3ca8d979341e22ad94d1ce685bcef868c99b520362 - not in ps
	I0813 21:13:46.877308  437298 cri.go:116] container: {ID:386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 Status:running}
	I0813 21:13:46.877315  437298 cri.go:118] skipping 386546988b9ed11931a602e7597ed937395a11ad41ba77d33603d66f324674c7 - not in ps
	I0813 21:13:46.877321  437298 cri.go:116] container: {ID:3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 Status:running}
	I0813 21:13:46.877328  437298 cri.go:118] skipping 3f23ad2409b33441ec997a4cc87fea54f23fde833efebe9dcb9a92bb14a6fe62 - not in ps
	I0813 21:13:46.877334  437298 cri.go:116] container: {ID:699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef Status:paused}
	I0813 21:13:46.877345  437298 cri.go:122] skipping {699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef paused}: state = "paused", want "running"
	I0813 21:13:46.877365  437298 cri.go:116] container: {ID:6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c Status:paused}
	I0813 21:13:46.877374  437298 cri.go:122] skipping {6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c paused}: state = "paused", want "running"
	I0813 21:13:46.877384  437298 cri.go:116] container: {ID:6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 Status:running}
	I0813 21:13:46.877393  437298 cri.go:118] skipping 6c02d5e3e4d66e98aaacce1a86078aa83a9b011ab019c6c9fc246ef9883d4966 - not in ps
	I0813 21:13:46.877401  437298 cri.go:116] container: {ID:7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69 Status:running}
	I0813 21:13:46.877411  437298 cri.go:116] container: {ID:875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18 Status:running}
	I0813 21:13:46.877420  437298 cri.go:116] container: {ID:bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 Status:running}
	I0813 21:13:46.877428  437298 cri.go:118] skipping bf382a6a87c89d76fed1fe47a18a0a22f1b4f222a794ce3ebc324d7138f4d5f9 - not in ps
	I0813 21:13:46.877437  437298 cri.go:116] container: {ID:c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 Status:running}
	I0813 21:13:46.877445  437298 cri.go:118] skipping c8063b597e446646ab1be2a6177af6feb9a713d6a1aa133a1e75627d3207fab6 - not in ps
	I0813 21:13:46.877453  437298 cri.go:116] container: {ID:f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c Status:running}
	I0813 21:13:46.877461  437298 cri.go:116] container: {ID:f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b Status:running}
	I0813 21:13:46.877467  437298 cri.go:116] container: {ID:fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a Status:running}
	I0813 21:13:46.877477  437298 cri.go:116] container: {ID:feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6 Status:running}
	I0813 21:13:46.877527  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69
	I0813 21:13:46.900626  437298 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69 875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18
	I0813 21:13:46.934800  437298 out.go:177] 
	W0813 21:13:46.935030  437298 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69 875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:13:46Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 21:13:46.935047  437298 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:13:46.939574  437298 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:13:46.941194  437298 out.go:177] 

                                                
                                                
** /stderr **
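Before attempting the pause, minikube enumerates candidates twice in the trace above: "crictl ps" (run once per namespace, filtered by the io.kubernetes.pod.namespace label) produces the set of IDs worth pausing, while "runc list -f json" reports every containerd task together with its OCI state. Entries absent from the crictl set are skipped ("not in ps"), as are tasks whose state is already "paused". Below is a minimal Go sketch of that intersection-and-filter step; the trimmed-down record type and the runningIn helper are illustrative assumptions, not minikube's actual cri.go code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// container mirrors the two fields of interest from `runc list -f json`.
	type container struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// runningIn returns the runc containers that are both running and
	// present in the crictl ps result, i.e. the set that would be paused.
	func runningIn(runcJSON []byte, inPS map[string]bool) ([]string, error) {
		var all []container
		if err := json.Unmarshal(runcJSON, &all); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range all {
			switch {
			case !inPS[c.ID]:
				// corresponds to "skipping <id> - not in ps"
			case c.Status != "running":
				// corresponds to `state = "paused", want "running"`
			default:
				ids = append(ids, c.ID)
			}
		}
		return ids, nil
	}

	func main() {
		data := []byte(`[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]`)
		ids, _ := runningIn(data, map[string]bool{"aaa": true, "bbb": true})
		fmt.Println(ids) // [aaa]
	}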
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210813210115-393438 --alsologtostderr -v=1 failed: exit status 80
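The exit status 80 (GUEST_PAUSE) follows directly from the usage error captured above: minikube batched two container IDs into a single "runc pause" invocation, but runc's pause subcommand accepts exactly one container ID per call, so the first attempt, its 540ms retry, and the second pass all fail identically with `"pause" requires exactly 1 argument(s)`. A minimal Go sketch of issuing the pause one container at a time follows; the pauseContainers helper and the hard-coded runc root are assumptions for illustration, not minikube's actual fix.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers pauses each container individually, because
	// `runc pause` takes exactly one container ID per invocation;
	// batching several IDs fails before any container is paused.
	func pauseContainers(ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc",
				"--root", "/run/containerd/runc/k8s.io", "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		ids := []string{
			"7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69",
			"875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18",
		}
		if err := pauseContainers(ids); err != nil {
			fmt.Println(err)
		}
	}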
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438: exit status 2 (293.972351ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
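The host probe prints Running on stdout yet exits with status 2, because minikube status encodes component health in its exit code; the harness deliberately tolerates that value here ("may be ok"), since a cluster that was just partially paused is expected to report unhealthy components while the VM itself stays up. A small Go sketch of such a tolerant check, assuming only the exit-code-2 convention observed above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// checkHost runs `minikube status --format {{.Host}}` for a profile and
	// treats exit status 2 as non-fatal, mirroring the harness's "may be ok".
	func checkHost(profile string) (string, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format", "{{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			err = nil // host running, some components stopped: tolerable here
		}
		return string(out), err
	}

	func main() {
		state, err := checkHost("embed-certs-20210813210115-393438")
		fmt.Printf("host=%q err=%v\n", state, err)
	}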
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813210115-393438 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210813210115-393438 logs -n 25: exit status 110 (11.429470813s)
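Collecting the post-mortem logs itself takes 11.4s and exits 110, since "minikube logs" has to query a cluster whose apiserver was paused moments earlier; the stdout captured below nevertheless contains the command audit table and the "Last Start" trace. That trace repeatedly shows minikube's retry helper at work (retry.go:31] will retry after ..., with growing, jittered delays while waiting for the new VM to obtain an IP). A minimal sketch of such a backoff loop is below; the base delay, growth factor, jitter, and cap are illustrative guesses, not retry.go's actual parameters.

	package main

	import (
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with exponentially growing, jittered
	// delays, logging each wait in the style of the trace below.
	func retryWithBackoff(fn func() error, attempts int) error {
		delay := 500 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			log.Printf("will retry after %v: %v", wait, err)
			time.Sleep(wait)
			if delay *= 2; delay > 4*time.Second {
				delay = 4 * time.Second
			}
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			if calls++; calls < 3 {
				return fmt.Errorf("waiting for machine to come up")
			}
			return nil
		}, 5)
		log.Println("done:", err)
	}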

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                               | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:04:27 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:18 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:04:51 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:28 UTC | Fri, 13 Aug 2021 21:05:01 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                  |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                  | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                  |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:40 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:41 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438  |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438             | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:43 UTC | Fri, 13 Aug 2021 21:12:44 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438             | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:45 UTC | Fri, 13 Aug 2021 21:12:46 UTC |
	|         | logs -n 25                                        |                                                  |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:47 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:48 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438             |                                                  |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:13:29 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                  |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:43 UTC | Fri, 13 Aug 2021 21:13:44 UTC |
	|         | embed-certs-20210813210115-393438                 |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                  |         |         |                               |                               |
	|---------|---------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:12:48
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:12:48.896742  436805 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:12:48.896815  436805 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:48.896825  436805 out.go:311] Setting ErrFile to fd 2...
	I0813 21:12:48.896828  436805 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:12:48.896930  436805 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:12:48.897183  436805 out.go:305] Setting JSON to false
	I0813 21:12:48.932420  436805 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6931,"bootTime":1628882238,"procs":200,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:12:48.932534  436805 start.go:121] virtualization: kvm guest
	I0813 21:12:48.934984  436805 out.go:177] * [cilium-20210813205926-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:12:48.936398  436805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:12:48.935130  436805 notify.go:169] Checking for updates...
	I0813 21:12:48.937752  436805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:12:48.939050  436805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:12:48.940283  436805 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:12:48.940739  436805 config.go:177] Loaded profile config "auto-20210813205925-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:48.940844  436805 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:12:48.940941  436805 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:12:48.940983  436805 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:12:48.971332  436805 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:12:48.971365  436805 start.go:278] selected driver: kvm2
	I0813 21:12:48.971372  436805 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:12:48.971389  436805 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:12:48.972492  436805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:48.972647  436805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:12:48.983950  436805 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:12:48.984010  436805 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 21:12:48.984200  436805 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:12:48.984232  436805 cni.go:93] Creating CNI manager for "cilium"
	I0813 21:12:48.984243  436805 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0813 21:12:48.984258  436805 start_flags.go:277] config:
	{Name:cilium-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:12:48.984446  436805 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:12:47.261935  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:47.262612  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:47.262644  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:47.262566  436318 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:12:48.251299  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:48.251846  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:48.251880  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:48.251783  436318 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:12:49.443604  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:49.444037  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:49.444073  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:49.443975  436318 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:12:51.123908  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:51.124477  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:51.124512  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:51.124414  436318 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:12:48.986195  436805 out.go:177] * Starting control plane node cilium-20210813205926-393438 in cluster cilium-20210813205926-393438
	I0813 21:12:48.986216  436805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:12:48.986244  436805 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:12:48.986267  436805 cache.go:56] Caching tarball of preloaded images
	I0813 21:12:48.986359  436805 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:12:48.986377  436805 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:12:48.986481  436805 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/config.json ...
	I0813 21:12:48.986506  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/config.json: {Name:mk9b313eaad93b7bc6348121417a645c135a861f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:12:48.986655  436805 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:12:48.986725  436805 start.go:313] acquiring machines lock for cilium-20210813205926-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:12:53.220270  435569 out.go:204]   - Generating certificates and keys ...
	I0813 21:12:53.471917  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:53.472568  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:53.472609  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:53.472501  436318 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:12:56.840319  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:56.840743  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find current IP address of domain auto-20210813205925-393438 in network mk-auto-20210813205925-393438
	I0813 21:12:56.840775  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | I0813 21:12:56.840680  436318 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 21:12:55.529034  435569 out.go:204]   - Booting up control plane ...
	I0813 21:13:01.167833  436805 start.go:317] acquired machines lock for "cilium-20210813205926-393438" in 12.1810837s
	I0813 21:13:01.167955  436805 start.go:89] Provisioning new machine with config: &{Name:cilium-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:13:01.168080  436805 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:12:59.962150  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:59.962562  436296 main.go:130] libmachine: (auto-20210813205925-393438) Found IP for machine: 192.168.39.95
	I0813 21:12:59.962587  436296 main.go:130] libmachine: (auto-20210813205925-393438) Reserving static IP address...
	I0813 21:12:59.962604  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has current primary IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:12:59.962908  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | unable to find host DHCP lease matching {name: "auto-20210813205925-393438", mac: "52:54:00:ef:83:5e", ip: "192.168.39.95"} in network mk-auto-20210813205925-393438
	I0813 21:13:00.012028  436296 main.go:130] libmachine: (auto-20210813205925-393438) Reserved static IP address: 192.168.39.95
	I0813 21:13:00.012061  436296 main.go:130] libmachine: (auto-20210813205925-393438) Waiting for SSH to be available...
	I0813 21:13:00.012089  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | Getting to WaitForSSH function...
	I0813 21:13:00.016976  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.017347  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.017399  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.017572  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | Using SSH client type: external
	I0813 21:13:00.017607  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa (-rw-------)
	I0813 21:13:00.017644  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:13:00.017661  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | About to run SSH command:
	I0813 21:13:00.017673  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | exit 0
	I0813 21:13:00.150185  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | SSH cmd err, output: <nil>: 
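The "Using SSH client type: external" block above shells out to the system ssh binary with the printed option list and runs a no-op remote command, `exit 0`: if that succeeds, sshd in the guest is up. A self-contained sketch of that probe, built with os/exec from the argv shown in the log (this is an approximation of the driver's WaitForSSH, not the actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` through the system ssh binary, mirroring the
// "About to run SSH command: exit 0" reachability check. The option list
// matches the argv printed in the log above.
func probeSSH(user, host, keyPath string, port int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, host),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprint(port),
		"exit 0", // no-op remote command: success means sshd is accepting logins
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := probeSSH("docker", "192.168.39.95", "/path/to/id_rsa", 22)
	fmt.Println("SSH reachable:", err == nil)
}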
	I0813 21:13:00.150639  436296 main.go:130] libmachine: (auto-20210813205925-393438) KVM machine creation complete!
	I0813 21:13:00.150735  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetConfigRaw
	I0813 21:13:00.151348  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:00.151553  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:00.151721  436296 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:13:00.151757  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetState
	I0813 21:13:00.154584  436296 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:13:00.154599  436296 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:13:00.154605  436296 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:13:00.154611  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.159653  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.159991  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.160021  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.160126  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:00.160273  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.160400  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.160508  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:00.160651  436296 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:00.160876  436296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0813 21:13:00.160891  436296 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:13:00.285776  436296 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:13:00.285803  436296 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:13:00.285813  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.291150  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.291517  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.291565  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.291643  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:00.291816  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.291962  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.292085  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:00.292291  436296 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:00.292452  436296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0813 21:13:00.292463  436296 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:13:00.419154  436296 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:13:00.419234  436296 main.go:130] libmachine: found compatible host: buildroot
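The provisioner detector above runs `cat /etc/os-release` over SSH and picks a provisioner from the ID field, which is how "found compatible host: buildroot" is decided. A small sketch of that parse, with the payload hard-coded from the log so the demo is runnable; the function name detectProvisioner is illustrative:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osRelease is the output of `cat /etc/os-release` captured in the log above.
const osRelease = `NAME=Buildroot
VERSION=2020.02.12
ID=buildroot
VERSION_ID=2020.02.12
PRETTY_NAME="Buildroot 2020.02.12"`

// detectProvisioner extracts the ID field, the key the
// "found compatible host: buildroot" decision is based on.
func detectProvisioner(release string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(release))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in os-release")
}

func main() {
	p, err := detectProvisioner(osRelease)
	fmt.Println(p, err) // buildroot <nil>
}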
	I0813 21:13:00.419245  436296 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:13:00.419253  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetMachineName
	I0813 21:13:00.419461  436296 buildroot.go:166] provisioning hostname "auto-20210813205925-393438"
	I0813 21:13:00.419490  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetMachineName
	I0813 21:13:00.419626  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.424671  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.425014  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.425042  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.425208  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:00.425420  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.425596  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.425739  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:00.425927  436296 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:00.426072  436296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0813 21:13:00.426085  436296 main.go:130] libmachine: About to run SSH command:
	sudo hostname auto-20210813205925-393438 && echo "auto-20210813205925-393438" | sudo tee /etc/hostname
	I0813 21:13:00.562128  436296 main.go:130] libmachine: SSH cmd err, output: <nil>: auto-20210813205925-393438
	
	I0813 21:13:00.562157  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.567169  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.567546  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.567595  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.567673  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:00.567821  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.568021  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.568193  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:00.568360  436296 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:00.568524  436296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0813 21:13:00.568552  436296 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210813205925-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210813205925-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210813205925-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:13:00.700890  436296 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:13:00.700916  436296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:13:00.700935  436296 buildroot.go:174] setting up certificates
	I0813 21:13:00.700944  436296 provision.go:83] configureAuth start
	I0813 21:13:00.700952  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetMachineName
	I0813 21:13:00.701198  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetIP
	I0813 21:13:00.706494  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.706819  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.706852  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.707035  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.711257  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.711576  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.711611  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.711735  436296 provision.go:138] copyHostCerts
	I0813 21:13:00.711797  436296 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:13:00.711811  436296 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:13:00.711870  436296 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:13:00.711970  436296 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:13:00.711981  436296 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:13:00.712008  436296 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:13:00.712071  436296 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:13:00.712079  436296 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:13:00.712097  436296 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:13:00.712159  436296 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.auto-20210813205925-393438 san=[192.168.39.95 192.168.39.95 localhost 127.0.0.1 minikube auto-20210813205925-393438]
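The "generating server cert" line above issues a CA-signed server certificate whose SAN list mixes IPs (192.168.39.95, 127.0.0.1) and DNS names (localhost, minikube, the machine name). The sketch below shows how such a certificate can be built with crypto/x509; newServerCert and the CA bootstrap in main are illustrative stand-ins, not minikube's actual provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by ca, splitting the SAN
// list into IPAddresses and DNSNames the way the logged san=[...] implies.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip) // 192.168.39.95, 127.0.0.1
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san) // localhost, minikube, ...
		}
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Bootstrap a throwaway CA so the demo is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	sans := []string{"192.168.39.95", "localhost", "127.0.0.1", "minikube", "auto-20210813205925-393438"}
	der, err := newServerCert(ca, caKey, "jenkins.auto-20210813205925-393438", sans)
	fmt.Println(len(der), err)
}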
	I0813 21:13:00.852728  436296 provision.go:172] copyRemoteCerts
	I0813 21:13:00.852791  436296 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:13:00.852828  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:00.858877  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.859284  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:00.859319  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:00.859575  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:00.859824  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:00.860021  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:00.860228  436296 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa Username:docker}
	I0813 21:13:00.959466  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:13:00.979568  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0813 21:13:01.000476  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:13:01.019635  436296 provision.go:86] duration metric: configureAuth took 318.676187ms
	I0813 21:13:01.019672  436296 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:13:01.019853  436296 config.go:177] Loaded profile config "auto-20210813205925-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:13:01.019889  436296 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:13:01.019903  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetURL
	I0813 21:13:01.023134  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | Using libvirt version 3000000
	I0813 21:13:01.028534  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.028893  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.028931  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.029081  436296 main.go:130] libmachine: Docker is up and running!
	I0813 21:13:01.029101  436296 main.go:130] libmachine: Reticulating splines...
	I0813 21:13:01.029110  436296 client.go:171] LocalClient.Create took 18.850753629s
	I0813 21:13:01.029132  436296 start.go:168] duration metric: libmachine.API.Create for "auto-20210813205925-393438" took 18.850818446s
	I0813 21:13:01.029147  436296 start.go:267] post-start starting for "auto-20210813205925-393438" (driver="kvm2")
	I0813 21:13:01.029157  436296 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:13:01.029180  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:01.029429  436296 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:13:01.029464  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:01.034690  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.035098  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.035147  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.035327  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:01.035536  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:01.035729  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:01.035901  436296 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa Username:docker}
	I0813 21:13:01.130484  436296 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:13:01.137216  436296 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:13:01.137250  436296 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:13:01.137319  436296 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:13:01.137453  436296 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:13:01.137568  436296 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:13:01.144068  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
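The filesync scan above maps files under .minikube/files to in-VM destinations by stripping the root prefix, so .../files/etc/ssl/certs/3934382.pem becomes /etc/ssl/certs/3934382.pem. A minimal sketch of that mapping, assuming only the prefix-stripping behavior the two log lines show (the root path in main is hypothetical):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// scanLocalAssets walks rootDir and maps each regular file to its in-VM
// destination by removing the root prefix, e.g.
// <root>/etc/ssl/certs/3934382.pem -> /etc/ssl/certs/3934382.pem.
func scanLocalAssets(rootDir string) (map[string]string, error) {
	assets := map[string]string{}
	root := filepath.ToSlash(rootDir)
	err := filepath.WalkDir(rootDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := "/" + strings.TrimPrefix(filepath.ToSlash(path), root+"/")
		assets[path] = dest
		return nil
	})
	return assets, err
}

func main() {
	assets, err := scanLocalAssets("/home/jenkins/.minikube/files") // hypothetical root
	fmt.Println(assets, err)
}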
	I0813 21:13:01.160544  436296 start.go:270] post-start completed in 131.378549ms
	I0813 21:13:01.160600  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetConfigRaw
	I0813 21:13:01.161197  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetIP
	I0813 21:13:01.166587  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.167017  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.167049  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.167488  436296 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/config.json ...
	I0813 21:13:01.167675  436296 start.go:129] duration metric: createHost completed in 19.005064316s
	I0813 21:13:01.167693  436296 start.go:80] releasing machines lock for "auto-20210813205925-393438", held for 19.005228407s
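The acquiring/releasing pairs around the machines lock (note the {Delay:500ms Timeout:13m0s} options logged earlier, and the "held for 19.005228407s" duration metric here) serialize VM creation across the parallel test processes. minikube uses a mutex library for this; the sketch below is a simplified file-based stand-in that reproduces the delay/timeout/duration-metric behavior, not the real lock.go:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireMachinesLock creates a lock file exclusively, retrying every delay
// until timeout, and reports how long acquisition took and how long the lock
// was held -- the shape of the "acquired machines lock in 12.1810837s" /
// "releasing machines lock ... held for 19.005228407s" lines.
func acquireMachinesLock(path string, delay, timeout time.Duration) (release func(), err error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			fmt.Printf("acquired machines lock in %v\n", time.Since(start))
			held := time.Now()
			return func() {
				os.Remove(path)
				fmt.Printf("releasing machines lock, held for %v\n", time.Since(held))
			}, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireMachinesLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
}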
	I0813 21:13:01.167754  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:01.167951  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetIP
	I0813 21:13:01.173513  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.173934  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.173988  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.174138  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:01.174332  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:01.174921  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .DriverName
	I0813 21:13:01.175160  436296 ssh_runner.go:149] Run: systemctl --version
	I0813 21:13:01.175221  436296 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:13:01.175224  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:01.175285  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHHostname
	I0813 21:13:01.183450  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.183889  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.183917  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.183935  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.184317  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:01.184534  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:01.184559  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:01.184581  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:01.184769  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHPort
	I0813 21:13:01.184848  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:01.184949  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHKeyPath
	I0813 21:13:01.185020  436296 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa Username:docker}
	I0813 21:13:01.185291  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetSSHUsername
	I0813 21:13:01.189232  436296 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/auto-20210813205925-393438/id_rsa Username:docker}
	I0813 21:13:01.286663  436296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:13:01.286813  436296 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:01.203010  434502 out.go:204]   - Configuring RBAC rules ...
	I0813 21:13:02.241283  434502 cni.go:93] Creating CNI manager for ""
	I0813 21:13:02.241315  434502 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:13:02.243231  434502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:13:02.243319  434502 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:13:02.252697  434502 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
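The "scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)" step writes an in-memory bridge CNI configuration straight into the CNI config directory. The actual 457-byte payload is not shown in the log; the sketch below writes a representative bridge + portmap conflist in the standard CNI format purely for illustration, and every field value here is an assumption rather than the file minikube generated:

package main

import (
	"fmt"
	"os"
)

// conflist is an ILLUSTRATIVE bridge CNI configuration in the standard
// conflist format; it is not the actual 457-byte payload from the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root; in the log this happens over
	// SSH inside the guest ("sudo mkdir -p /etc/cni/net.d" just above).
	err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
	fmt.Println(err)
}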
	I0813 21:13:02.279770  434502 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:13:02.279824  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=embed-certs-20210813210115-393438 minikube.k8s.io/updated_at=2021_08_13T21_13_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:02.279834  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:01.170203  436805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 21:13:01.170398  436805 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:01.170457  436805 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:01.185861  436805 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0813 21:13:01.186345  436805 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:01.186975  436805 main.go:130] libmachine: Using API Version  1
	I0813 21:13:01.186995  436805 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:01.187344  436805 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:01.187487  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetMachineName
	I0813 21:13:01.187643  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:01.187805  436805 start.go:160] libmachine.API.Create for "cilium-20210813205926-393438" (driver="kvm2")
	I0813 21:13:01.187834  436805 client.go:168] LocalClient.Create starting
	I0813 21:13:01.187870  436805 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:13:01.187899  436805 main.go:130] libmachine: Decoding PEM data...
	I0813 21:13:01.187929  436805 main.go:130] libmachine: Parsing certificate...
	I0813 21:13:01.188098  436805 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:13:01.188132  436805 main.go:130] libmachine: Decoding PEM data...
	I0813 21:13:01.188155  436805 main.go:130] libmachine: Parsing certificate...
	I0813 21:13:01.188212  436805 main.go:130] libmachine: Running pre-create checks...
	I0813 21:13:01.188225  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .PreCreateCheck
	I0813 21:13:01.188536  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:13:01.189073  436805 main.go:130] libmachine: Creating machine...
	I0813 21:13:01.189097  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .Create
	I0813 21:13:01.189309  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Creating KVM machine...
	I0813 21:13:01.192261  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found existing default KVM network
	I0813 21:13:01.194444  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.194239  436917 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:1c:e8}}
	I0813 21:13:01.195409  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.195313  436917 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 21:13:01.197044  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.196941  436917 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:a6:3e}}
	I0813 21:13:01.198217  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.198127  436917 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:d6:76}}
	I0813 21:13:01.200823  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.200674  436917 network.go:288] reserving subnet 192.168.83.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.83.0:0xc000010020] misses:0}
	I0813 21:13:01.200860  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.200744  436917 network.go:235] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
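The network.go lines above walk candidate private /24 subnets (192.168.39, .50, .61, .72, then .83) and pick the first one not already occupied by a host interface. A simplified sketch of that scan follows; the 11-address stride is inferred from the logged sequence, and the real code additionally reserves the chosen subnet for a minute to avoid races between parallel creates:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate 192.168.x.0/24 that no local
// interface address falls inside, echoing the "skipping subnet ... that is
// taken" / "using free private subnet ..." decisions above.
func firstFreeSubnet() (*net.IPNet, error) {
	taken := func(n *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && n.Contains(ip) {
				return true
			}
		}
		return false
	}
	for third := 39; third <= 254; third += 11 { // 39, 50, 61, 72, 83, ...
		_, subnet, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		if taken(subnet) {
			fmt.Printf("skipping subnet %s that is taken\n", subnet)
			continue
		}
		fmt.Printf("using free private subnet %s\n", subnet)
		return subnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	if _, err := firstFreeSubnet(); err != nil {
		fmt.Println(err)
	}
}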
	I0813 21:13:01.236794  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | trying to create private KVM network mk-cilium-20210813205926-393438 192.168.83.0/24...
	I0813 21:13:01.519316  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | private KVM network mk-cilium-20210813205926-393438 192.168.83.0/24 created
	I0813 21:13:01.519356  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438 ...
	I0813 21:13:01.519377  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.519245  436917 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:13:01.519417  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:13:01.519440  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:13:01.900498  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.900346  436917 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa...
	I0813 21:13:01.996763  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.996608  436917 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/cilium-20210813205926-393438.rawdisk...
	I0813 21:13:01.996828  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Writing magic tar header
	I0813 21:13:01.996852  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Writing SSH key tar header
	I0813 21:13:01.996874  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:01.996745  436917 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438 ...
	I0813 21:13:01.996915  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438
	I0813 21:13:01.996945  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438 (perms=drwx------)
	I0813 21:13:01.996965  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:13:01.996987  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:13:01.997003  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:13:01.997027  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:13:01.997051  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:13:01.997066  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:13:01.997080  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:13:01.997099  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:13:01.997111  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:13:01.997123  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Creating domain...
	I0813 21:13:01.997140  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:13:01.997152  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Checking permissions on dir: /home
	I0813 21:13:01.997163  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Skipping /home - not owner
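The permission-fixing pass above walks from the machine directory up toward /, setting the executable (traversal) bit on each directory the process owns and skipping the rest ("Skipping /home - not owner"). A simplified sketch of that walk-up, approximating the ownership check by handling the chmod failure:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// fixTraversal walks from dir up toward the filesystem root, making each
// directory traversable by its owner -- the "Setting executable bit set on
// ... (perms=drwxr-xr-x)" sequence above. Directories we cannot chmod (e.g.
// not the owner) are skipped, mirroring "Skipping /home - not owner".
func fixTraversal(dir string) error {
	for p := dir; p != string(filepath.Separator); p = filepath.Dir(p) {
		info, err := os.Stat(p)
		if err != nil {
			return err
		}
		mode := info.Mode().Perm() | 0o100 // ensure owner-execute
		if err := os.Chmod(p, mode); err != nil {
			fmt.Printf("Skipping %s - %v\n", p, err)
			continue
		}
		fmt.Printf("Setting executable bit set on %s (perms=%v)\n", p, mode)
	}
	return nil
}

func main() {
	dir := filepath.Join(os.TempDir(), "minikube-demo", "machines", "demo")
	_ = os.MkdirAll(dir, 0o700)
	fmt.Println(fixTraversal(dir))
}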
	I0813 21:13:02.064202  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:52:f2:07 in network default
	I0813 21:13:02.064991  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:02.065024  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Ensuring networks are active...
	I0813 21:13:02.067521  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Ensuring network default is active
	I0813 21:13:02.067937  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Ensuring network mk-cilium-20210813205926-393438 is active
	I0813 21:13:02.068554  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Getting domain xml...
	I0813 21:13:02.070891  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Creating domain...
	I0813 21:13:02.483222  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Waiting to get IP...
	I0813 21:13:02.484193  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:02.484790  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:02.484826  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:02.484754  436917 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:13:02.749119  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:02.749674  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:02.749709  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:02.749613  436917 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:13:03.132168  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:03.132794  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:03.132831  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:03.132722  436917 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:13:03.557250  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:03.557790  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:03.557816  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:03.557736  436917 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:13:05.293457  436296 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.006613187s)
	I0813 21:13:05.293595  436296 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:13:05.293663  436296 ssh_runner.go:149] Run: which lz4
	I0813 21:13:05.298559  436296 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:13:05.303409  436296 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:13:05.303443  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
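The stat command above was mangled by the logger's format-verb handling (the literal %s and %y in the command string were interpreted as Go verbs, producing %!s(MISSING)); the probe most likely ran as stat -c "%s %y" /preloaded.tar.lz4, printing size and mtime. Either way, the control flow is: check whether the preload tarball already exists in the guest, and only copy the ~900 MB file when the check exits non-zero. A self-contained sketch, with a hypothetical Runner interface standing in for ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// Runner abstracts "run a command on the guest"; the log's ssh_runner does
// this over SSH, while localRunner below uses the host shell so the sketch
// stays self-contained.
type Runner interface {
	Run(cmd string) error
}

type localRunner struct{}

func (localRunner) Run(cmd string) error {
	return exec.Command("/bin/sh", "-c", cmd).Run()
}

// ensurePreload mirrors the stat-then-copy decision above: if the existence
// check fails ("No such file or directory"), transfer the tarball.
// copyTarball stands in for the scp step and is stubbed out here.
func ensurePreload(r Runner, copyTarball func() error) error {
	if err := r.Run(`stat /preloaded.tar.lz4`); err == nil {
		return nil // already present, skip the ~900MB transfer
	}
	fmt.Println("preload missing on guest, copying tarball")
	return copyTarball()
}

func main() {
	err := ensurePreload(localRunner{}, func() error {
		fmt.Println("scp preloaded-images-...tar.lz4 --> /preloaded.tar.lz4")
		return nil
	})
	fmt.Println(err)
}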
	I0813 21:13:02.838863  434502 ops.go:34] apiserver oom_adj: -16
	I0813 21:13:02.838937  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:03.472191  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:03.972649  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:04.472749  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:04.972323  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:05.472747  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:05.972821  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:06.472940  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:06.972709  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:07.472129  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
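The repeated `kubectl get sa default` runs above (one every ~500ms) poll for the default service account, which only appears once the controller-manager is up; RBAC setup cannot proceed before then. A small sketch of that readiness loop, assuming only the cadence and command visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it exits zero or
// the timeout elapses, matching the ~500ms polling cadence in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}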
	I0813 21:13:04.032365  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:04.032913  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:04.032945  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:04.032856  436917 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:13:04.621592  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:04.622139  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:04.622171  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:04.622080  436917 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:13:05.457485  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:05.458044  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:05.458087  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:05.457980  436917 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 21:13:06.206577  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:06.207376  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:06.207412  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:06.207327  436917 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:13:07.195987  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:07.196506  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:07.196535  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:07.196461  436917 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:13:08.387880  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:08.388466  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:08.388503  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:08.388369  436917 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:13:08.953404  436296 containerd.go:546] Took 3.654888 seconds to copy over tarball
	I0813 21:13:08.953485  436296 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:13:07.973034  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:08.472890  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:08.972133  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:09.472654  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:09.972125  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:10.472174  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:10.972998  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:11.472054  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:11.972387  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:10.067453  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:10.067856  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:10.067890  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:10.067805  436917 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:13:12.415642  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:12.416191  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:12.416218  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:12.416130  436917 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:13:15.784148  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:18.382547  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find current IP address of domain cilium-20210813205926-393438 in network mk-cilium-20210813205926-393438
	I0813 21:13:18.382588  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | I0813 21:13:15.784652  436917 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 21:13:17.027737  436296 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.074225408s)
	I0813 21:13:18.382599  436296 containerd.go:553] Took 9.429158 seconds to extract the tarball
	I0813 21:13:18.382627  436296 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:13:18.469081  436296 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:13:18.682176  436296 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:13:18.736733  436296 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:13:18.782069  436296 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:13:18.798311  436296 docker.go:153] disabling docker service ...
	I0813 21:13:18.798370  436296 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:13:18.809795  436296 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:13:18.819907  436296 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:13:18.962789  436296 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:13:19.107267  436296 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:13:19.118823  436296 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:13:19.137525  436296 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 21:13:19.153481  436296 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:13:19.162548  436296 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:13:19.162598  436296 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:13:19.184747  436296 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:13:19.192484  436296 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:13:19.347938  436296 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:13:19.701278  436296 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:13:19.701352  436296 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:13:19.708888  436296 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0813 21:13:20.813885  436296 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:13:20.820768  436296 start.go:413] Will wait 60s for crictl version
	I0813 21:13:20.820886  436296 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:13:20.854003  436296 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:13:20.854074  436296 ssh_runner.go:149] Run: containerd --version
	I0813 21:13:20.883817  436296 ssh_runner.go:149] Run: containerd --version
	I0813 21:13:18.441388  434502 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (6.468964192s)
	I0813 21:13:18.472483  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:19.624820  434502 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.152295211s)
	I0813 21:13:19.972375  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:20.472653  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:20.972291  434502 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:21.129206  434502 kubeadm.go:985] duration metric: took 18.849438418s to wait for elevateKubeSystemPrivileges.
	I0813 21:13:21.129235  434502 kubeadm.go:392] StartCluster complete in 7m10.947897469s
	I0813 21:13:21.129256  434502 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.129377  434502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:13:21.130406  434502 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.670536  434502 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210813210115-393438" rescaled to 1
	I0813 21:13:21.670605  434502 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.72.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:13:21.672142  434502 out.go:177] * Verifying Kubernetes components...
	I0813 21:13:21.672214  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:21.670699  434502 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:13:21.670701  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:13:21.670903  434502 config.go:177] Loaded profile config "embed-certs-20210813210115-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:13:21.672306  434502 addons.go:59] Setting dashboard=true in profile "embed-certs-20210813210115-393438"
	I0813 21:13:21.672326  434502 addons.go:135] Setting addon dashboard=true in "embed-certs-20210813210115-393438"
	I0813 21:13:21.672328  434502 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210813210115-393438"
	I0813 21:13:21.672354  434502 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210813210115-393438"
	W0813 21:13:21.672363  434502 addons.go:147] addon metrics-server should already be in state true
	I0813 21:13:21.672406  434502 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	W0813 21:13:21.672334  434502 addons.go:147] addon dashboard should already be in state true
	I0813 21:13:21.672452  434502 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	I0813 21:13:21.672307  434502 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210813210115-393438"
	I0813 21:13:21.672943  434502 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210813210115-393438"
	I0813 21:13:21.672945  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	W0813 21:13:21.672954  434502 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:13:21.672945  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.672979  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.672980  434502 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	I0813 21:13:21.673067  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.673261  434502 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210813210115-393438"
	I0813 21:13:21.673285  434502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210813210115-393438"
	I0813 21:13:21.673994  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.682497  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.682719  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.691129  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.691363  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0813 21:13:21.692691  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.693207  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.693232  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.693615  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.694163  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.694202  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.706803  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0813 21:13:21.707454  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.708018  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.708039  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.708482  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.709098  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.709140  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.715938  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0813 21:13:21.716309  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.716759  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.716776  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.717111  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.717652  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.717690  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.726779  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0813 21:13:21.726787  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0813 21:13:21.726844  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43287
	I0813 21:13:21.730841  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.730884  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.730960  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.731518  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.731536  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.731551  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.731567  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.731794  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.731811  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.731922  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.731929  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.732100  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:21.732153  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:21.732264  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.732463  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:21.733940  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35817
	I0813 21:13:21.734376  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.734903  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.734924  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.735277  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.735429  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:21.739558  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:21.742227  434502 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:13:21.743710  434502 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:13:21.743772  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:13:21.743781  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:13:21.743801  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:13:21.742811  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:21.743684  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:21.746100  434502 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:13:21.746169  434502 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:13:20.927573  436296 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:13:20.927637  436296 main.go:130] libmachine: (auto-20210813205925-393438) Calling .GetIP
	I0813 21:13:20.933612  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:20.934070  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:83:5e", ip: ""} in network mk-auto-20210813205925-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:12:57 +0000 UTC Type:0 Mac:52:54:00:ef:83:5e Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:auto-20210813205925-393438 Clientid:01:52:54:00:ef:83:5e}
	I0813 21:13:20.934118  436296 main.go:130] libmachine: (auto-20210813205925-393438) DBG | domain auto-20210813205925-393438 has defined IP address 192.168.39.95 and MAC address 52:54:00:ef:83:5e in network mk-auto-20210813205925-393438
	I0813 21:13:20.934303  436296 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:13:20.939307  436296 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:13:20.951330  436296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:13:20.951398  436296 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:20.984917  436296 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:13:20.984944  436296 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:13:20.984986  436296 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:21.017151  436296 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:13:21.017175  436296 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:13:21.017238  436296 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:13:21.049411  436296 cni.go:93] Creating CNI manager for ""
	I0813 21:13:21.049437  436296 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:13:21.049449  436296 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:13:21.049464  436296 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210813205925-393438 NodeName:auto-20210813205925-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/
var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:13:21.049620  436296 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "auto-20210813205925-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 21:13:21.049715  436296 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=auto-20210813205925-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.95 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 21:13:21.049762  436296 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:13:21.057023  436296 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:13:21.057087  436296 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:13:21.064637  436296 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (541 bytes)
	I0813 21:13:21.076827  436296 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:13:21.089423  436296 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 21:13:21.104870  436296 ssh_runner.go:149] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0813 21:13:21.109343  436296 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:13:21.120780  436296 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438 for IP: 192.168.39.95
	I0813 21:13:21.120831  436296 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:13:21.120845  436296 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:13:21.120895  436296 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.key
	I0813 21:13:21.120908  436296 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.crt with IP's: []
	I0813 21:13:21.370582  436296 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.crt ...
	I0813 21:13:21.370617  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.crt: {Name:mk5d67578de07b9b8d2ba1a44017851244be952e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.370842  436296 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.key ...
	I0813 21:13:21.370864  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/client.key: {Name:mkb8e7469579419970e241b2c759e52997ea0e86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.370978  436296 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key.48dc68e6
	I0813 21:13:21.370993  436296 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt.48dc68e6 with IP's: [192.168.39.95 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:13:21.601224  436296 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt.48dc68e6 ...
	I0813 21:13:21.601263  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt.48dc68e6: {Name:mk8b33460171c4d024091b633f095d9f8e195584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.601448  436296 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key.48dc68e6 ...
	I0813 21:13:21.601466  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key.48dc68e6: {Name:mk97a917386b687ec97c112b5fb4a46c8f6fcfbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.601564  436296 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt.48dc68e6 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt
	I0813 21:13:21.601640  436296 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key.48dc68e6 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key
	I0813 21:13:21.601760  436296 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.key
	I0813 21:13:21.601774  436296 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.crt with IP's: []
	I0813 21:13:21.720825  436296 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.crt ...
	I0813 21:13:21.720895  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.crt: {Name:mka8535be3677dae859a3b4a3faee93da3bb6981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.721160  436296 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.key ...
	I0813 21:13:21.721208  436296 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.key: {Name:mk734f4c31799470597aa7126bd1839ce8065057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:21.721531  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:13:21.721605  436296 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:13:21.721622  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:13:21.721665  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:13:21.721698  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:13:21.721730  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:13:21.721793  436296 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:13:21.723162  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:13:21.750493  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 21:13:21.775354  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:13:21.798009  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/auto-20210813205925-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:13:21.819073  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:13:21.840113  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:13:21.860737  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:13:21.880940  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:13:21.899530  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:13:21.919645  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:13:21.937378  436296 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:13:21.955709  436296 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:13:21.969083  436296 ssh_runner.go:149] Run: openssl version
	I0813 21:13:21.977447  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:13:21.987293  436296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:21.993207  436296 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:21.993252  436296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:22.001014  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:13:22.256369  435569 out.go:204]   - Configuring RBAC rules ...
	I0813 21:13:21.751613  434502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:13:21.746182  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:13:21.751663  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:13:21.747272  434502 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813210115-393438"
	W0813 21:13:21.751718  434502 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:13:21.751750  434502 host.go:66] Checking if "embed-certs-20210813210115-393438" exists ...
	I0813 21:13:21.751762  434502 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:13:21.751775  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:13:21.751795  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:13:21.750045  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.751970  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:13:21.752017  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.750810  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:13:21.752203  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:13:21.752276  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.752315  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.752359  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:13:21.752491  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:13:21.760203  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.760871  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.761247  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:13:21.761275  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.761459  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:13:21.761639  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:13:21.761669  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.761699  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:13:21.761822  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:13:21.761848  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:13:21.761924  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:13:21.762166  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:13:21.762319  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:13:21.762466  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:13:21.767139  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39617
	I0813 21:13:21.767616  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.768177  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.768203  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.768609  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.769194  434502 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:21.769246  434502 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:21.781765  434502 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0813 21:13:21.782150  434502 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:21.782766  434502 main.go:130] libmachine: Using API Version  1
	I0813 21:13:21.782788  434502 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:21.783228  434502 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:21.783451  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetState
	I0813 21:13:21.786946  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .DriverName
	I0813 21:13:21.787156  434502 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:13:21.787173  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:13:21.787191  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHHostname
	I0813 21:13:21.793442  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.793913  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8f:97", ip: ""} in network mk-embed-certs-20210813210115-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:05:43 +0000 UTC Type:0 Mac:52:54:00:f7:8f:97 Iaid: IPaddr:192.168.72.95 Prefix:24 Hostname:embed-certs-20210813210115-393438 Clientid:01:52:54:00:f7:8f:97}
	I0813 21:13:21.793937  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | domain embed-certs-20210813210115-393438 has defined IP address 192.168.72.95 and MAC address 52:54:00:f7:8f:97 in network mk-embed-certs-20210813210115-393438
	I0813 21:13:21.794194  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHPort
	I0813 21:13:21.794360  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHKeyPath
	I0813 21:13:21.794521  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .GetSSHUsername
	I0813 21:13:21.794684  434502 sshutil.go:53] new ssh client: &{IP:192.168.72.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/embed-certs-20210813210115-393438/id_rsa Username:docker}
	I0813 21:13:22.047964  434502 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:13:22.047984  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:13:22.086740  434502 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:13:22.093051  434502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813210115-393438" to be "Ready" ...
	I0813 21:13:22.093119  434502 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:13:22.107229  434502 node_ready.go:49] node "embed-certs-20210813210115-393438" has status "Ready":"True"
	I0813 21:13:22.107254  434502 node_ready.go:38] duration metric: took 14.171352ms waiting for node "embed-certs-20210813210115-393438" to be "Ready" ...
	I0813 21:13:22.107268  434502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:13:22.122150  434502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:22.141302  434502 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:13:22.141320  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:13:22.150144  434502 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:13:22.183150  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:13:22.183175  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:13:22.254566  434502 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:13:22.254605  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:13:22.392702  434502 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:13:22.401602  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:13:22.401630  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:13:22.508946  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:13:22.508977  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:13:22.613797  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:13:22.613827  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:13:22.011112  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:13:22.020851  436296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:13:22.025591  436296 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:13:22.025641  436296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:13:22.031652  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:13:22.039489  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:13:22.047855  436296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:13:22.052925  436296 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:13:22.052973  436296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:13:22.059054  436296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
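The commands above install extra CAs into the guest's trust store: OpenSSL looks certificates up by subject hash, so each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink (51391683.0 and 3ec20f2e.0 here). A sketch of the same procedure in Go, shelling out to openssl for the hash; error handling is minimal:

    package sketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert symlinks pemPath into /etc/ssl/certs under its subject hash.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link)              // make the operation idempotent, like ln -fs
        return os.Symlink(pemPath, link) // e.g. /etc/ssl/certs/51391683.0 -> 393438.pem
    }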
	I0813 21:13:22.068437  436296 kubeadm.go:390] StartCluster: {Name:auto-20210813205925-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:auto-20210813205925-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:13:22.068533  436296 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:13:22.068604  436296 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:13:22.107634  436296 cri.go:76] found id: ""
	I0813 21:13:22.107696  436296 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:13:22.115385  436296 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:13:22.127071  436296 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:13:22.136871  436296 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:13:22.136909  436296 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
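Because the config check found no existing /etc/kubernetes/*.conf files (this is a fresh VM), stale-config cleanup is skipped and kubeadm init runs directly. The long --ignore-preflight-errors value is just a comma-joined list of checks that minikube manages itself; a sketch of how such a command line can be assembled (names here are illustrative):

    package sketch

    import (
        "fmt"
        "strings"
    )

    // kubeadmInitCmd builds the init invocation with the preflight checks to skip,
    // e.g. kubeadmInitCmd("/var/tmp/minikube/kubeadm.yaml",
    //          []string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "Mem"})
    func kubeadmInitCmd(cfgPath string, ignores []string) string {
        return fmt.Sprintf("kubeadm init --config %s --ignore-preflight-errors=%s",
            cfgPath, strings.Join(ignores, ","))
    }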
	I0813 21:13:23.099450  435569 cni.go:93] Creating CNI manager for ""
	I0813 21:13:23.099475  435569 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:13:18.904366  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:19.606326  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Found IP for machine: 192.168.83.14
	I0813 21:13:19.606379  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has current primary IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:19.606394  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Reserving static IP address...
	I0813 21:13:19.606409  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | unable to find host DHCP lease matching {name: "cilium-20210813205926-393438", mac: "52:54:00:98:f8:89", ip: "192.168.83.14"} in network mk-cilium-20210813205926-393438
	I0813 21:13:19.606894  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Reserved static IP address: 192.168.83.14
	I0813 21:13:19.606922  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Waiting for SSH to be available...
	I0813 21:13:19.606944  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Getting to WaitForSSH function...
	I0813 21:13:19.613459  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:19.613963  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:19.614000  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:19.614261  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Using SSH client type: external
	I0813 21:13:19.614298  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa (-rw-------)
	I0813 21:13:19.614334  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:13:19.614348  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | About to run SSH command:
	I0813 21:13:19.614360  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | exit 0
	I0813 21:13:19.715468  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | SSH cmd err, output: exit status 255: 
	I0813 21:13:19.715496  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0813 21:13:19.715508  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | command : exit 0
	I0813 21:13:19.715517  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | err     : exit status 255
	I0813 21:13:19.715528  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | output  : 
	I0813 21:13:22.715610  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Getting to WaitForSSH function...
	I0813 21:13:22.721513  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:22.721952  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:22.721983  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:22.722113  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Using SSH client type: external
	I0813 21:13:22.722136  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa (-rw-------)
	I0813 21:13:22.722202  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:13:22.722214  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | About to run SSH command:
	I0813 21:13:22.722226  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | exit 0
	I0813 21:13:22.855652  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | SSH cmd err, output: <nil>: 
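The first "exit 0" probe at 21:13:19 failed with exit status 255 because sshd was not up yet; after a pause the probe is retried and succeeds. A sketch of that wait-for-SSH pattern (the ssh flags match the ones logged above; attempt count and backoff are illustrative):

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries a cheap `exit 0` over ssh until the guest answers.
    func waitForSSH(userHost, keyPath string, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("ssh",
                "-i", keyPath,
                "-o", "ConnectTimeout=10",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                userHost, "exit 0").Run()
            if err == nil {
                return nil // sshd is accepting connections
            }
            time.Sleep(backoff) // cf. the ~3s gap between 21:13:19 and 21:13:22 above
        }
        return fmt.Errorf("ssh not available after %d attempts: %w", attempts, err)
    }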
	I0813 21:13:22.856446  436805 main.go:130] libmachine: (cilium-20210813205926-393438) KVM machine creation complete!
	I0813 21:13:22.856515  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:13:22.857224  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:22.857476  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:22.857681  436805 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:13:22.857715  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetState
	I0813 21:13:22.861373  436805 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:13:22.861394  436805 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:13:22.861404  436805 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:13:22.861420  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:22.867151  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:22.867517  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:22.867541  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:22.867779  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:22.867950  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:22.868082  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:22.868211  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:22.868424  436805 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:22.868653  436805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.83.14 22 <nil> <nil>}
	I0813 21:13:22.868672  436805 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:13:23.006989  436805 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:13:23.007014  436805 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:13:23.007033  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.014591  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.015514  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.015551  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.015773  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.016009  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.016259  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.016402  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.016622  436805 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:23.016862  436805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.83.14 22 <nil> <nil>}
	I0813 21:13:23.016920  436805 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:13:23.153280  436805 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:13:23.153365  436805 main.go:130] libmachine: found compatible host: buildroot
	I0813 21:13:23.153383  436805 main.go:130] libmachine: Provisioning with buildroot...
	I0813 21:13:23.153395  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetMachineName
	I0813 21:13:23.153678  436805 buildroot.go:166] provisioning hostname "cilium-20210813205926-393438"
	I0813 21:13:23.153751  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetMachineName
	I0813 21:13:23.153971  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.160615  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.161059  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.161130  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.161364  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.161566  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.161746  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.161898  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.162114  436805 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:23.162325  436805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.83.14 22 <nil> <nil>}
	I0813 21:13:23.162351  436805 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210813205926-393438 && echo "cilium-20210813205926-393438" | sudo tee /etc/hostname
	I0813 21:13:23.321551  436805 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210813205926-393438
	
	I0813 21:13:23.321588  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.327951  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.328352  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.328382  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.328621  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.328841  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.329028  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.329195  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.329418  436805 main.go:130] libmachine: Using SSH client type: native
	I0813 21:13:23.329663  436805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.83.14 22 <nil> <nil>}
	I0813 21:13:23.329692  436805 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210813205926-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210813205926-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210813205926-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:13:23.486826  436805 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:13:23.486868  436805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:13:23.486926  436805 buildroot.go:174] setting up certificates
	I0813 21:13:23.486944  436805 provision.go:83] configureAuth start
	I0813 21:13:23.486963  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetMachineName
	I0813 21:13:23.487290  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetIP
	I0813 21:13:23.493983  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.494388  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.494420  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.494571  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.499820  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.500227  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.500262  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.500430  436805 provision.go:138] copyHostCerts
	I0813 21:13:23.500507  436805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:13:23.500519  436805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:13:23.500575  436805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:13:23.500668  436805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:13:23.500684  436805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:13:23.500708  436805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:13:23.500785  436805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:13:23.500795  436805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:13:23.500818  436805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:13:23.500866  436805 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.cilium-20210813205926-393438 san=[192.168.83.14 192.168.83.14 localhost 127.0.0.1 minikube cilium-20210813205926-393438]
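The server certificate is signed by the local CA with every name a client might dial placed in its SANs (the san=[...] list above: the VM IP, localhost, 127.0.0.1, and the machine names). A condensed crypto/x509 sketch of that step; CA loading and PEM output of the key are elided, and this is illustrative rather than minikube's provision code:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a certificate whose SANs cover ips and dnsNames.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
        org string, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,      // e.g. 192.168.83.14, 127.0.0.1
            DNSNames:     dnsNames, // e.g. localhost, minikube, cilium-20210813205926-393438
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }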
	I0813 21:13:23.577152  436805 provision.go:172] copyRemoteCerts
	I0813 21:13:23.577227  436805 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:13:23.577262  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.583069  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.583440  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.583475  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.583653  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.583858  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.584026  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.584172  436805 sshutil.go:53] new ssh client: &{IP:192.168.83.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa Username:docker}
	I0813 21:13:23.686356  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:13:23.709082  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 21:13:23.731489  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 21:13:23.753068  436805 provision.go:86] duration metric: configureAuth took 266.105769ms
	I0813 21:13:23.753095  436805 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:13:23.753294  436805 config.go:177] Loaded profile config "cilium-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:13:23.753320  436805 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:13:23.753343  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetURL
	I0813 21:13:23.756587  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | Using libvirt version 3000000
	I0813 21:13:23.762362  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.762727  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.762755  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.763018  436805 main.go:130] libmachine: Docker is up and running!
	I0813 21:13:23.763031  436805 main.go:130] libmachine: Reticulating splines...
	I0813 21:13:23.763039  436805 client.go:171] LocalClient.Create took 22.57519585s
	I0813 21:13:23.763058  436805 start.go:168] duration metric: libmachine.API.Create for "cilium-20210813205926-393438" took 22.575255466s
	I0813 21:13:23.763070  436805 start.go:267] post-start starting for "cilium-20210813205926-393438" (driver="kvm2")
	I0813 21:13:23.763077  436805 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:13:23.763094  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:23.763319  436805 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:13:23.763350  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.768269  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.768676  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.768705  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.768901  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.769090  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.769272  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.769447  436805 sshutil.go:53] new ssh client: &{IP:192.168.83.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa Username:docker}
	I0813 21:13:23.040480  436296 out.go:204]   - Generating certificates and keys ...
	I0813 21:13:23.003595  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:13:23.003628  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:13:23.399910  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:13:23.399943  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:13:23.499809  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:13:23.499840  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:13:23.818184  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:13:23.818211  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:13:23.897568  434502 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:13:23.897595  434502 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:13:23.973787  434502 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:13:24.138602  434502 pod_ready.go:102] pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace has status "Ready":"False"
	I0813 21:13:25.060496  434502 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.967340592s)
	I0813 21:13:25.060536  434502 start.go:728] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS
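The 2.9s "Completed" command above is the host-record injection spelled out: it reads the coredns ConfigMap, uses sed to splice a hosts block in front of the forward directive, and replaces the ConfigMap. After the edit, the relevant part of the Corefile looks roughly like this (surrounding directives elided):

        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The hosts plugin answers host.minikube.internal from that static entry and falls through to forward for everything else.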
	I0813 21:13:25.060560  434502 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.97379106s)
	I0813 21:13:25.060577  434502 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.910412326s)
	I0813 21:13:25.060593  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.060602  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.060609  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.060613  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.060993  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.061031  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.061068  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.061129  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.061160  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.062525  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.062544  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.062561  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.062562  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.062591  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.062604  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.062547  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.062620  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.062635  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.062648  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.062898  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.062905  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.062924  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.062933  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.064023  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.064041  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.429472  434502 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.036706478s)
	I0813 21:13:25.429527  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.429540  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.429907  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.429946  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.429955  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.429965  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:25.429979  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:25.430319  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:25.430371  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:25.430399  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:25.430414  434502 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813210115-393438"
	I0813 21:13:26.175577  434502 pod_ready.go:102] pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace has status "Ready":"False"
	I0813 21:13:26.370609  434502 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.396768443s)
	I0813 21:13:26.370662  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:26.370742  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:26.371157  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:26.371177  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:26.371193  434502 main.go:130] libmachine: Making call to close driver server
	I0813 21:13:26.371208  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) Calling .Close
	I0813 21:13:26.371263  434502 main.go:130] libmachine: (embed-certs-20210813210115-393438) DBG | Closing plugin on server side
	I0813 21:13:26.371500  434502 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:13:26.371517  434502 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:13:26.307454  436296 out.go:204]   - Booting up control plane ...
	I0813 21:13:23.101352  435569 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:13:23.101428  435569 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:13:23.142133  435569 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
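The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is a standard bridge CNI plugin chain. The exact bytes minikube ships may differ; a representative conflist of this kind looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }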
	I0813 21:13:23.165951  435569 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:13:23.166076  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:23.166165  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=newest-cni-20210813211202-393438 minikube.k8s.io/updated_at=2021_08_13T21_13_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:23.609953  435569 ops.go:34] apiserver oom_adj: -16
	I0813 21:13:23.610205  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:24.263804  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:24.764023  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:25.263138  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:25.764168  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:26.263919  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:26.763352  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:27.263340  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:26.374693  434502 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0813 21:13:26.374717  434502 addons.go:344] enableAddons completed in 4.704048991s
	I0813 21:13:27.669834  434502 pod_ready.go:97] error getting pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6pg65" not found
	I0813 21:13:27.669864  434502 pod_ready.go:81] duration metric: took 5.547681059s waiting for pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace to be "Ready" ...
	E0813 21:13:27.669878  434502 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-6pg65" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6pg65" not found
	I0813 21:13:27.669888  434502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-g7rvp" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.695063  434502 pod_ready.go:92] pod "coredns-558bd4d5db-g7rvp" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:27.695089  434502 pod_ready.go:81] duration metric: took 25.193107ms waiting for pod "coredns-558bd4d5db-g7rvp" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.695103  434502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.709176  434502 pod_ready.go:92] pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:27.709194  434502 pod_ready.go:81] duration metric: took 14.083239ms waiting for pod "etcd-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.709203  434502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.723703  434502 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:27.723730  434502 pod_ready.go:81] duration metric: took 14.51935ms waiting for pod "kube-apiserver-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.723744  434502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.732910  434502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:27.732928  434502 pod_ready.go:81] duration metric: took 9.175462ms waiting for pod "kube-controller-manager-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.732941  434502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lg5kj" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:23.865192  436805 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:13:23.871838  436805 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:13:23.871864  436805 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:13:23.871925  436805 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:13:23.872065  436805 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:13:23.872234  436805 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:13:23.879659  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:13:23.897771  436805 start.go:270] post-start completed in 134.676542ms
	I0813 21:13:23.897825  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:13:23.899003  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetIP
	I0813 21:13:23.906499  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.906939  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.906970  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.907228  436805 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/config.json ...
	I0813 21:13:23.907417  436805 start.go:129] duration metric: createHost completed in 22.739324876s
	I0813 21:13:23.907434  436805 start.go:80] releasing machines lock for "cilium-20210813205926-393438", held for 22.739563751s
	I0813 21:13:23.907472  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:23.907660  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetIP
	I0813 21:13:23.913026  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.913419  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.913449  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.913570  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:23.913748  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:23.914190  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .DriverName
	I0813 21:13:23.914484  436805 ssh_runner.go:149] Run: systemctl --version
	I0813 21:13:23.914517  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.914547  436805 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:13:23.914588  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:13:23.920951  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.921078  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.921372  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.921399  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.921433  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:23.921457  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:23.921675  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.921765  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHPort
	I0813 21:13:23.921866  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.921943  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:13:23.922033  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.922109  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:13:23.922189  436805 sshutil.go:53] new ssh client: &{IP:192.168.83.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa Username:docker}
	I0813 21:13:23.922301  436805 sshutil.go:53] new ssh client: &{IP:192.168.83.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/cilium-20210813205926-393438/id_rsa Username:docker}
	I0813 21:13:24.053740  436805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:13:24.053876  436805 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:28.072994  436805 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.019088751s)
	I0813 21:13:28.073137  436805 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:13:28.073208  436805 ssh_runner.go:149] Run: which lz4
	I0813 21:13:28.078012  436805 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:13:28.082983  436805 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:13:28.083014  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
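Since `stat /preloaded.tar.lz4` exited 1 (no such file), the ~900 MB preload tarball is copied into the guest before images are imported. The check-then-copy pattern in sketch form; the run and copyFn callbacks stand in for the ssh_runner calls seen in the log:

    package sketch

    // ensurePreload copies the preload tarball only when the guest lacks it.
    // run executes a command on the guest; copyFn scp's a local file across.
    func ensurePreload(run func(cmd string) error, copyFn func(src, dst string) error, src string) error {
        if err := run("stat /preloaded.tar.lz4"); err == nil {
            return nil // already present; skip the expensive transfer
        }
        return copyFn(src, "/preloaded.tar.lz4")
    }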
	I0813 21:13:27.833597  434502 pod_ready.go:92] pod "kube-proxy-lg5kj" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:27.835141  434502 pod_ready.go:81] duration metric: took 102.180298ms waiting for pod "kube-proxy-lg5kj" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:27.835163  434502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:28.233759  434502 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:13:28.233782  434502 pod_ready.go:81] duration metric: took 398.610142ms waiting for pod "kube-scheduler-embed-certs-20210813210115-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:13:28.233793  434502 pod_ready.go:38] duration metric: took 6.126511151s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:13:28.233816  434502 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:13:28.233867  434502 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:13:28.263564  434502 api_server.go:70] duration metric: took 6.592922025s to wait for apiserver process to appear ...
	I0813 21:13:28.263592  434502 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:13:28.263604  434502 api_server.go:239] Checking apiserver healthz at https://192.168.72.95:8443/healthz ...
	I0813 21:13:28.272447  434502 api_server.go:265] https://192.168.72.95:8443/healthz returned 200:
	ok
	I0813 21:13:28.273685  434502 api_server.go:139] control plane version: v1.21.3
	I0813 21:13:28.273708  434502 api_server.go:129] duration metric: took 10.109002ms to wait for apiserver health ...
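
The healthz wait above polls https://192.168.72.95:8443/healthz until it answers 200 "ok" before reading the control-plane version. A sketch of such a poll; skipping TLS verification here is a sketch-only shortcut (the real client trusts the cluster CA), and the timeout values are assumptions.

// healthz.go - sketch of the apiserver healthz poll above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Sketch-only: the real client verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.95:8443/healthz", time.Minute))
}
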
	I0813 21:13:28.273719  434502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:13:28.440921  434502 system_pods.go:59] 8 kube-system pods found
	I0813 21:13:28.440956  434502 system_pods.go:61] "coredns-558bd4d5db-g7rvp" [31de3d17-fc0d-487b-9b5e-29a850447c11] Running
	I0813 21:13:28.440963  434502 system_pods.go:61] "etcd-embed-certs-20210813210115-393438" [b44dd392-3295-450f-84e9-528267a0e37d] Running
	I0813 21:13:28.440970  434502 system_pods.go:61] "kube-apiserver-embed-certs-20210813210115-393438" [0d5084e0-1afc-419b-9c07-dbd4891d1a8d] Running
	I0813 21:13:28.440976  434502 system_pods.go:61] "kube-controller-manager-embed-certs-20210813210115-393438" [429dbde3-d0df-42a7-90ae-ae18fdb75553] Running
	I0813 21:13:28.440982  434502 system_pods.go:61] "kube-proxy-lg5kj" [bcceb77f-7a57-4461-a36d-bc56cd609030] Running
	I0813 21:13:28.440989  434502 system_pods.go:61] "kube-scheduler-embed-certs-20210813210115-393438" [c408a0ab-d59e-4475-bdf6-473396aa648c] Running
	I0813 21:13:28.441012  434502 system_pods.go:61] "metrics-server-7c784ccb57-2bkk5" [fc0c5961-f1c7-4e5b-8c73-ec11bcd71140] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:13:28.441026  434502 system_pods.go:61] "storage-provisioner" [fee29ac6-9f54-43d1-a6af-19999cf2f219] Running
	I0813 21:13:28.441038  434502 system_pods.go:74] duration metric: took 167.311696ms to wait for pod list to return data ...
	I0813 21:13:28.441053  434502 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:13:28.635853  434502 default_sa.go:45] found service account: "default"
	I0813 21:13:28.635879  434502 default_sa.go:55] duration metric: took 194.817982ms for default service account to be created ...
	I0813 21:13:28.635889  434502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:13:28.848029  434502 system_pods.go:86] 8 kube-system pods found
	I0813 21:13:28.848062  434502 system_pods.go:89] "coredns-558bd4d5db-g7rvp" [31de3d17-fc0d-487b-9b5e-29a850447c11] Running
	I0813 21:13:28.848073  434502 system_pods.go:89] "etcd-embed-certs-20210813210115-393438" [b44dd392-3295-450f-84e9-528267a0e37d] Running
	I0813 21:13:28.848079  434502 system_pods.go:89] "kube-apiserver-embed-certs-20210813210115-393438" [0d5084e0-1afc-419b-9c07-dbd4891d1a8d] Running
	I0813 21:13:28.848084  434502 system_pods.go:89] "kube-controller-manager-embed-certs-20210813210115-393438" [429dbde3-d0df-42a7-90ae-ae18fdb75553] Running
	I0813 21:13:28.848089  434502 system_pods.go:89] "kube-proxy-lg5kj" [bcceb77f-7a57-4461-a36d-bc56cd609030] Running
	I0813 21:13:28.848101  434502 system_pods.go:89] "kube-scheduler-embed-certs-20210813210115-393438" [c408a0ab-d59e-4475-bdf6-473396aa648c] Running
	I0813 21:13:28.848121  434502 system_pods.go:89] "metrics-server-7c784ccb57-2bkk5" [fc0c5961-f1c7-4e5b-8c73-ec11bcd71140] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:13:28.848134  434502 system_pods.go:89] "storage-provisioner" [fee29ac6-9f54-43d1-a6af-19999cf2f219] Running
	I0813 21:13:28.848144  434502 system_pods.go:126] duration metric: took 212.248229ms to wait for k8s-apps to be running ...
	I0813 21:13:28.848161  434502 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:13:28.848218  434502 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:28.889851  434502 system_svc.go:56] duration metric: took 41.682034ms for WaitForService to wait for kubelet.
	I0813 21:13:28.889885  434502 kubeadm.go:547] duration metric: took 7.21925103s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:13:28.889918  434502 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:13:29.036800  434502 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:13:29.036838  434502 node_conditions.go:123] node cpu capacity is 2
	I0813 21:13:29.036856  434502 node_conditions.go:105] duration metric: took 146.931949ms to run NodePressure ...
	I0813 21:13:29.036869  434502 start.go:231] waiting for startup goroutines ...
	I0813 21:13:29.102492  434502 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:13:29.104737  434502 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813210115-393438" cluster and "default" namespace by default
	I0813 21:13:27.763459  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:28.263896  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:28.764060  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:29.263914  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:29.763761  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:30.263219  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:30.763756  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:31.263670  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:31.763630  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:32.263455  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:32.123278  436805 containerd.go:546] Took 4.045299 seconds to copy over tarball
	I0813 21:13:32.123380  436805 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 21:13:32.763538  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:33.263903  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:33.763493  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:37.910364  435569 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.146815999s)
	I0813 21:13:38.263993  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:39.376634  435569 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.11260079s)
	I0813 21:13:39.763906  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:40.263948  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:40.763931  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:41.263934  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:41.332799  436805 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.209384932s)
	I0813 21:13:41.332841  436805 containerd.go:553] Took 9.209524 seconds to extract the tarball
	I0813 21:13:41.332857  436805 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:13:41.414260  436805 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:13:41.605177  436805 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:13:41.659577  436805 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:13:42.445348  436805 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:13:42.458744  436805 docker.go:153] disabling docker service ...
	I0813 21:13:42.458825  436805 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:13:42.470197  436805 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:13:42.480846  436805 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:13:42.605470  436805 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:13:42.737887  436805 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:13:42.750114  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:13:42.763755  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
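
Both config writes above (/etc/crictl.yaml and /etc/containerd/config.toml) use the same trick for creating root-owned files over a non-root SSH session: the payload is base64-encoded locally, piped through `base64 -d` on the node, and written with `sudo tee`, so no character in the config can break shell quoting. A sketch of composing that command; runSSH is a stand-in for the real SSH runner, not minikube's API.

// remotewrite.go - sketch of the base64-over-tee file write seen above.
package main

import (
	"encoding/base64"
	"fmt"
	"path/filepath"
	"strconv"
)

func writeRemoteFile(path string, contents []byte, runSSH func(cmd string) error) error {
	// Encoding the payload sidesteps quoting/escaping issues in the remote shell.
	encoded := base64.StdEncoding.EncodeToString(contents)
	cmd := fmt.Sprintf(`sudo mkdir -p %s && printf %%s "%s" | base64 -d | sudo tee %s`,
		filepath.Dir(path), encoded, path)
	return runSSH("/bin/bash -c " + strconv.Quote(cmd))
}

func main() {
	_ = writeRemoteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///run/containerd/containerd.sock\n"),
		func(cmd string) error { fmt.Println(cmd); return nil })
}
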
	I0813 21:13:42.778740  436805 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:13:42.786256  436805 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:13:42.786318  436805 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:13:42.805451  436805 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:13:42.811888  436805 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:13:42.930051  436805 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:13:43.672559  436805 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:13:43.672626  436805 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:13:43.679892  436805 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
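
The `retry.go:31] will retry after 1.104660288s` message above is a jittered retry: the stat races the containerd restart, fails once, and succeeds on the next attempt (below). A sketch of the wait-until-deadline pattern in the same spirit; the jitter bounds are an assumption, not minikube's exact backoff.

// retrystat.go - sketch of the wait-for-socket retry seen above.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func retryUntil(timeout time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		// Jittered delay (bounds are an assumption).
		delay := 500*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	err := retryUntil(60*time.Second, func() error {
		_, err := os.Stat("/run/containerd/containerd.sock")
		return err
	})
	fmt.Println(err)
}
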
	I0813 21:13:44.784810  436805 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:13:44.793026  436805 start.go:413] Will wait 60s for crictl version
	I0813 21:13:44.793106  436805 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:13:44.844286  436805 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:13:44.844364  436805 ssh_runner.go:149] Run: containerd --version
	I0813 21:13:44.887097  436805 ssh_runner.go:149] Run: containerd --version
	I0813 21:13:43.670688  435569 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.406691891s)
	I0813 21:13:43.764000  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:44.263933  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:44.763912  435569 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:45.001949  435569 kubeadm.go:985] duration metric: took 21.835911735s to wait for elevateKubeSystemPrivileges.
	I0813 21:13:45.001989  435569 kubeadm.go:392] StartCluster complete in 1m2.635937899s
	I0813 21:13:45.002011  435569 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.002132  435569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:13:45.004562  435569 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.561196  435569 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813211202-393438" rescaled to 1
	I0813 21:13:45.561267  435569 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:13:45.562801  435569 out.go:177] * Verifying Kubernetes components...
	I0813 21:13:45.561490  435569 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:13:45.562867  435569 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:13:45.561510  435569 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:13:45.561696  435569 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:13:45.562962  435569 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813211202-393438"
	I0813 21:13:45.562981  435569 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813211202-393438"
	W0813 21:13:45.562988  435569 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:13:45.562997  435569 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813211202-393438"
	I0813 21:13:45.563014  435569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813211202-393438"
	I0813 21:13:45.563032  435569 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:13:45.563499  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:45.563534  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:45.563541  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:45.563571  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:45.583418  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0813 21:13:45.583879  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:45.584399  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:13:45.584428  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:45.584816  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:45.585018  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:13:45.588295  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0813 21:13:45.588691  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:45.589179  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:13:45.589196  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:45.589583  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:45.590239  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:45.590286  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:45.598732  435569 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813211202-393438"
	W0813 21:13:45.598752  435569 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:13:45.598782  435569 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:13:45.599173  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:45.599210  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:45.612571  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0813 21:13:45.613295  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:45.613577  435569 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0813 21:13:45.614009  435569 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:45.614198  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:13:45.614212  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:45.614485  435569 main.go:130] libmachine: Using API Version  1
	I0813 21:13:45.614502  435569 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:45.614929  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:45.615036  435569 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:45.615097  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:13:45.615854  435569 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:45.615900  435569 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:45.620420  435569 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:13:44.929479  436805 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:13:44.929522  436805 main.go:130] libmachine: (cilium-20210813205926-393438) Calling .GetIP
	I0813 21:13:44.936593  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:44.937054  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f8:89", ip: ""} in network mk-cilium-20210813205926-393438: {Iface:virbr5 ExpiryTime:2021-08-13 22:13:18 +0000 UTC Type:0 Mac:52:54:00:98:f8:89 Iaid: IPaddr:192.168.83.14 Prefix:24 Hostname:cilium-20210813205926-393438 Clientid:01:52:54:00:98:f8:89}
	I0813 21:13:44.937100  436805 main.go:130] libmachine: (cilium-20210813205926-393438) DBG | domain cilium-20210813205926-393438 has defined IP address 192.168.83.14 and MAC address 52:54:00:98:f8:89 in network mk-cilium-20210813205926-393438
	I0813 21:13:44.937314  436805 ssh_runner.go:149] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0813 21:13:44.942234  436805 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
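
The grep/rewrite pair above (repeated later for control-plane.minikube.internal) keeps /etc/hosts idempotent: probe for the entry first, and only on a miss strip any stale line and append a fresh one through a temp file plus `sudo cp`, since the session itself is not root. A sketch of the same pattern; the helper name is invented.

// hostsentry.go - sketch of the idempotent /etc/hosts update above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureHostsEntry(ip, host string) error {
	// Fast path: entry already present (tab-separated, end-anchored).
	if err := exec.Command("grep", ip+"\t"+host+"$", "/etc/hosts").Run(); err == nil {
		return nil
	}
	// Drop any stale entry, append the new one, then sudo-copy into place,
	// mirroring the /tmp/h.$$ trick in the log line above.
	script := fmt.Sprintf(
		`{ grep -v $'\t%s$' /etc/hosts; echo $'%s\t%s'; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		host, ip, host)
	return exec.Command("/bin/bash", "-c", script).Run()
}

func main() {
	fmt.Println(ensureHostsEntry("192.168.83.1", "host.minikube.internal"))
}
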
	I0813 21:13:44.954335  436805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:13:44.954388  436805 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:44.998553  436805 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:13:44.998582  436805 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:13:44.998650  436805 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:13:45.039519  436805 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:13:45.039543  436805 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:13:45.039609  436805 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:13:45.083198  436805 cni.go:93] Creating CNI manager for "cilium"
	I0813 21:13:45.083230  436805 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:13:45.083243  436805 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.14 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210813205926-393438 NodeName:cilium-20210813205926-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:13:45.083379  436805 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cilium-20210813205926-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
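
Every field of the kubeadm options struct logged at kubeadm.go:153 maps directly into the YAML above (pod CIDR, advertise address, CRI socket, extra args). A heavily reduced sketch of rendering such a config with text/template; the struct fields and the template body here are trimmed assumptions, not minikube's actual template.

// kubeadmcfg.go - reduced sketch of rendering the kubeadm config above.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	CRISocket         string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress:  "192.168.83.14",
		APIServerPort:     8443,
		CRISocket:         "/run/containerd/containerd.sock",
		NodeName:          "cilium-20210813205926-393438",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.21.3",
	})
}
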
	I0813 21:13:45.083508  436805 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cilium-20210813205926-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.14 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0813 21:13:45.083579  436805 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:13:45.094333  436805 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:13:45.094404  436805 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:13:45.103049  436805 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0813 21:13:45.118320  436805 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:13:45.132345  436805 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0813 21:13:45.148421  436805 ssh_runner.go:149] Run: grep 192.168.83.14	control-plane.minikube.internal$ /etc/hosts
	I0813 21:13:45.153025  436805 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:13:45.166762  436805 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438 for IP: 192.168.83.14
	I0813 21:13:45.166834  436805 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:13:45.166864  436805 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:13:45.166931  436805 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.key
	I0813 21:13:45.166947  436805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.crt with IP's: []
	I0813 21:13:45.292452  436805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.crt ...
	I0813 21:13:45.292492  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.crt: {Name:mk15b4a130e6afc7c1757b0aa96b0f4d778721bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.292751  436805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.key ...
	I0813 21:13:45.292776  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/client.key: {Name:mk07f3cacbd4533e4355a39198f3c00abb91b9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.292934  436805 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key.af48f6dd
	I0813 21:13:45.292949  436805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt.af48f6dd with IP's: [192.168.83.14 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:13:45.381307  436805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt.af48f6dd ...
	I0813 21:13:45.381347  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt.af48f6dd: {Name:mk7f5d35bd0ae79517f8fd4d19e4b813b0c1132c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.381554  436805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key.af48f6dd ...
	I0813 21:13:45.381573  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key.af48f6dd: {Name:mk0f6c9121b159b7c8a5fa99b57723b2365f310e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.381691  436805 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt.af48f6dd -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt
	I0813 21:13:45.381781  436805 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key.af48f6dd -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key
	I0813 21:13:45.381849  436805 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.key
	I0813 21:13:45.381863  436805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.crt with IP's: []
	I0813 21:13:45.745129  436805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.crt ...
	I0813 21:13:45.745167  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.crt: {Name:mke7bf690fe3b6158d19e7c05e5aaf38813d4e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:13:45.745401  436805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.key ...
	I0813 21:13:45.745421  436805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.key: {Name:mk0b7c1a147961ec913e679787255a87690ba707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
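
The certs.go/crypto.go lines above generate a CA-signed client certificate, an apiserver serving certificate with the listed IP SANs, and an aggregator proxy-client pair. A reduced sketch of the client-cert step using crypto/x509; the subject, lifetime, and key size are assumptions rather than minikube's exact parameters.

// clientcert.go - sketch of issuing a client certificate signed by an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func issueClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Sign the new cert with the CA's key; the parent is the CA certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() { fmt.Println("see issueClientCert") }
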
	I0813 21:13:45.745675  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:13:45.745735  436805 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:13:45.745749  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:13:45.745788  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:13:45.745829  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:13:45.745868  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:13:45.745935  436805 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:13:45.747047  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:13:45.770355  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 21:13:45.793655  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:13:45.817946  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/cilium-20210813205926-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:13:45.840788  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:13:45.865072  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:13:45.888675  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:13:45.907769  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:13:45.930057  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:13:45.950024  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:13:45.970064  436805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:13:45.988132  436805 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:13:46.000572  436805 ssh_runner.go:149] Run: openssl version
	I0813 21:13:46.006812  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:13:46.016990  436805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:13:46.021766  436805 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:13:46.021817  436805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:13:46.027886  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
	I0813 21:13:46.038739  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:13:46.047039  436805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:13:46.051623  436805 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:13:46.051675  436805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:13:46.057563  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:13:46.065732  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:13:46.073603  436805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:46.078430  436805 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:46.078478  436805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:13:46.084273  436805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
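
The openssl/ln pairs above install each CA into the node's trust store: OpenSSL locates CAs in /etc/ssl/certs by subject-hash file names (e.g. b5213941.0 for minikubeCA.pem), so each PEM is symlinked under its hash. A sketch of computing that link name; the paths match the log, the helper name is invented.

// certlink.go - sketch of the subject-hash symlinking seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCert(pem string) error {
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// -f replaces a stale link; -s keeps one canonical copy of the cert.
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
}
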
	I0813 21:13:46.094926  436805 kubeadm.go:390] StartCluster: {Name:cilium-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.14 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:13:46.095020  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:13:46.095063  436805 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:13:46.132862  436805 cri.go:76] found id: ""
	I0813 21:13:46.132935  436805 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:13:46.140927  436805 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:13:46.147915  436805 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:13:46.155032  436805 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 21:13:46.155069  436805 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	fee4d3f11b1ed       523cad1a4df73       6 seconds ago       Exited              dashboard-metrics-scraper   1                   2757524da210c
	34b7bed0566a4       523cad1a4df73       8 seconds ago       Exited              dashboard-metrics-scraper   0                   2757524da210c
	feba03c02cdd6       9a07b5b4bfac0       20 seconds ago      Running             kubernetes-dashboard        0                   6c02d5e3e4d66
	f0d1f74ac1b6b       6e38f40d628db       20 seconds ago      Running             storage-provisioner         0                   1a65131b1565e
	fce52cb2f6260       296a6d5035e2d       24 seconds ago      Running             coredns                     0                   bf382a6a87c89
	f982e62ab4f99       adb2816ea823a       25 seconds ago      Running             kube-proxy                  0                   386546988b9ed
	875436dc90a14       bc2bb319a7038       55 seconds ago      Running             kube-controller-manager     0                   c8063b597e446
	699b039e9f9b1       6be0dc1302e30       55 seconds ago      Running             kube-scheduler              0                   11f1b554ea9b0
	6b45b9874af45       0369cf4303ffd       55 seconds ago      Running             etcd                        0                   3096fb97d92c5
	7eeee683347cf       3d174f00aa39e       55 seconds ago      Running             kube-apiserver              0                   2bed3213c62a1
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:42 UTC, end at Fri 2021-08-13 21:13:48 UTC. --
	Aug 13 21:13:31 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:31.027192112Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.630079763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.638012943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.638768345Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.643940246Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.717882878Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.726898624Z" level=info msg="StartContainer for \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.339967870Z" level=info msg="StartContainer for \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\" returns successfully"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.406667324Z" level=info msg="Finish piping stderr of container \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.406910822Z" level=info msg="Finish piping stdout of container \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.409336125Z" level=info msg="TaskExit event &TaskExit{ContainerID:34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a,ID:34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a,Pid:7018,ExitStatus:1,ExitedAt:2021-08-13 21:13:40.408125691 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.495193651Z" level=info msg="shim disconnected" id=34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.495838003Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.937053896Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:13:41 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:41.362284757Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.539891101Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.542172617Z" level=info msg="StartContainer for \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.637049269Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.670044750Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.246582040Z" level=info msg="StartContainer for \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\" returns successfully"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.284334110Z" level=info msg="TaskExit event &TaskExit{ContainerID:fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051,ID:fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051,Pid:7088,ExitStatus:1,ExitedAt:2021-08-13 21:13:44.283092537 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.285287363Z" level=info msg="Finish piping stderr of container \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.285866466Z" level=info msg="Finish piping stdout of container \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.362143446Z" level=info msg="shim disconnected" id=fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.362281103Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	
	* 
	* ==> coredns [fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000022] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.163103] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038045] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.934417] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +0.887909] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.291177] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011560] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:06] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[  +0.299806] systemd-fstab-generator[2103]: Ignoring "noauto" for root device
	[  +0.147130] systemd-fstab-generator[2118]: Ignoring "noauto" for root device
	[  +0.260000] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +6.278816] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[ +10.458309] kauditd_printk_skb: 38 callbacks suppressed
	[Aug13 21:08] kauditd_printk_skb: 95 callbacks suppressed
	[Aug13 21:12] systemd-fstab-generator[5466]: Ignoring "noauto" for root device
	[Aug13 21:13] systemd-fstab-generator[5883]: Ignoring "noauto" for root device
	[ +20.617504] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.000864] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.178877] kauditd_printk_skb: 8 callbacks suppressed
	[  +3.334235] systemd-fstab-generator[7138]: Ignoring "noauto" for root device
	[  +0.822151] systemd-fstab-generator[7191]: Ignoring "noauto" for root device
	[  +1.159607] systemd-fstab-generator[7244]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c] <==
	* 2021-08-13 21:13:19.420432 W | etcdserver: request "header:<ID:10617934592130892125 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" mod_revision:326 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" value_size:6637 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" > >>" with result "size:16" took too long (592.01384ms) to execute
	2021-08-13 21:13:19.605699 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.001917504s) to execute
	2021-08-13 21:13:19.606103 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" " with result "range_response_count:1 size:6735" took too long (173.152403ms) to execute
	2021-08-13 21:13:19.606316 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-controller\" " with result "range_response_count:0 size:5" took too long (165.873818ms) to execute
	2021-08-13 21:13:19.606618 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.155473371s) to execute
	2021-08-13 21:13:19.609661 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210813210115-393438.169afa17a5ded5a6\" " with result "range_response_count:1 size:895" took too long (1.170325521s) to execute
	2021-08-13 21:13:27.958921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:13:36.726352 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000322354s) to execute
	WARNING: 2021/08/13 21:13:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 21:13:37.309446 W | wal: sync duration of 3.314706773s, expected less than 1s
	2021-08-13 21:13:37.900166 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:7838" took too long (3.417001855s) to execute
	2021-08-13 21:13:37.900890 W | etcdserver: request "header:<ID:10617934592130892712 > lease_revoke:<id:135a7b415c4b419e>" with result "size:28" took too long (590.17155ms) to execute
	2021-08-13 21:13:37.901082 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1131" took too long (1.887400452s) to execute
	2021-08-13 21:13:37.901444 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210813210115-393438.169afa17a5ded5a6\" " with result "range_response_count:1 size:895" took too long (730.346919ms) to execute
	2021-08-13 21:13:37.902109 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.174335828s) to execute
	2021-08-13 21:13:38.961813 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 21:13:39.340358 W | wal: sync duration of 1.390341525s, expected less than 1s
	2021-08-13 21:13:39.364055 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:7838" took too long (415.65776ms) to execute
	2021-08-13 21:13:39.381863 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (660.579855ms) to execute
	2021-08-13 21:13:43.538667 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.818997612s) to execute
	2021-08-13 21:13:43.539311 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.221487782s) to execute
	2021-08-13 21:13:43.542827 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:8195" took too long (1.589978556s) to execute
	2021-08-13 21:13:43.542999 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1131" took too long (1.530664236s) to execute
	2021-08-13 21:13:43.543165 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-7c784ccb57-2bkk5\" " with result "range_response_count:1 size:4409" took too long (2.179910158s) to execute
	2021-08-13 21:13:43.543361 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-7c784ccb57-2bkk5.169afa1a72cf0e7f\" " with result "range_response_count:1 size:862" took too long (2.175445564s) to execute
	
	* 
	* ==> kernel <==
	*  21:13:58 up 8 min,  0 users,  load average: 1.90, 1.02, 0.47
	Linux embed-certs-20210813210115-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69] <==
	* Trace[244210469]: ---"Object stored in database" 928ms (21:13:00.386)
	Trace[244210469]: [933.629175ms] [933.629175ms] END
	I0813 21:13:39.392667       1 trace.go:205] Trace[222088877]: "GuaranteedUpdate etcd3" type:*coordination.Lease (13-Aug-2021 21:13:38.660) (total time: 731ms):
	Trace[222088877]: ---"Transaction committed" 731ms (21:13:00.392)
	Trace[222088877]: [731.939748ms] [731.939748ms] END
	I0813 21:13:39.392796       1 trace.go:205] Trace[1904221178]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20210813210115-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:38.660) (total time: 732ms):
	Trace[1904221178]: ---"Object stored in database" 732ms (21:13:00.392)
	Trace[1904221178]: [732.581644ms] [732.581644ms] END
	I0813 21:13:43.547227       1 trace.go:205] Trace[1624093536]: "List etcd3" key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 21:13:41.947) (total time: 1599ms):
	Trace[1624093536]: [1.599810701s] [1.599810701s] END
	I0813 21:13:43.547846       1 trace.go:205] Trace[31110861]: "List" url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.72.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.947) (total time: 1600ms):
	Trace[31110861]: ---"Listing from storage done" 1599ms (21:13:00.547)
	Trace[31110861]: [1.600479322s] [1.600479322s] END
	I0813 21:13:43.548136       1 trace.go:205] Trace[1408934217]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-7c784ccb57-2bkk5,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.358) (total time: 2190ms):
	Trace[1408934217]: ---"About to write a response" 2186ms (21:13:00.544)
	Trace[1408934217]: [2.190096646s] [2.190096646s] END
	I0813 21:13:43.550204       1 trace.go:205] Trace[1102788854]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.72.95,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:13:42.007) (total time: 1542ms):
	Trace[1102788854]: ---"About to write a response" 1542ms (21:13:00.550)
	Trace[1102788854]: [1.542934792s] [1.542934792s] END
	I0813 21:13:43.553301       1 trace.go:205] Trace[612282577]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 21:13:41.362) (total time: 2190ms):
	Trace[612282577]: ---"initial value restored" 2183ms (21:13:00.545)
	Trace[612282577]: [2.190558859s] [2.190558859s] END
	I0813 21:13:43.553454       1 trace.go:205] Trace[1330177402]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-7c784ccb57-2bkk5.169afa1a72cf0e7f,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.362) (total time: 2191ms):
	Trace[1330177402]: ---"About to apply patch" 2183ms (21:13:00.545)
	Trace[1330177402]: [2.191102717s] [2.191102717s] END
	
	* 
	* ==> kube-controller-manager [875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18] <==
	* I0813 21:13:25.180182       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-2bkk5"
	I0813 21:13:25.410348       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:13:25.477057       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.488685       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:13:25.502904       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.589182       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.591738       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.592039       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.655451       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.655848       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.664892       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:13:25.760237       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.766097       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.773912       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.774119       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.850284       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.851244       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.859159       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.859419       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:13:25.899393       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.899624       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.900010       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.900141       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.986617       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-vfmn7"
	I0813 21:13:26.000386       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-67f8b"
	
	* 
	* ==> kube-proxy [f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b] <==
	* I0813 21:13:23.266077       1 node.go:172] Successfully retrieved node IP: 192.168.72.95
	I0813 21:13:23.266145       1 server_others.go:140] Detected node IP 192.168.72.95
	W0813 21:13:23.266182       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:13:23.825651       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:13:23.825768       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:13:23.825785       1 server_others.go:212] Using iptables Proxier.
	I0813 21:13:23.836104       1 server.go:643] Version: v1.21.3
	I0813 21:13:23.837231       1 config.go:315] Starting service config controller
	I0813 21:13:23.837405       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 21:13:23.870324       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:13:23.873987       1 config.go:224] Starting endpoint slice config controller
	I0813 21:13:23.873999       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:13:23.874006       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0813 21:13:23.883128       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:13:23.938593       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef] <==
	* E0813 21:12:58.219794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.220283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:58.220692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:58.221083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:58.221588       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:58.222033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.222086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:12:58.222145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.222192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:58.225870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:58.226457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:58.227197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:59.145974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:59.220852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:59.230721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:59.315601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.371690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:59.503452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:59.525707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:59.546836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:59.565060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.576665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.577912       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:59.674811       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 21:13:02.706776       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:42 UTC, end at Fri 2021-08-13 21:13:58 UTC. --
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: W0813 21:13:25.212962    5892 container.go:586] Failed to update stats for container "/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140": /sys/fs/cgroup/cpuset/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:25.293677    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4tg\" (UniqueName: \"kubernetes.io/projected/fc0c5961-f1c7-4e5b-8c73-ec11bcd71140-kube-api-access-tt4tg\") pod \"metrics-server-7c784ccb57-2bkk5\" (UID: \"fc0c5961-f1c7-4e5b-8c73-ec11bcd71140\") "
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:25.293857    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fc0c5961-f1c7-4e5b-8c73-ec11bcd71140-tmp-dir\") pod \"metrics-server-7c784ccb57-2bkk5\" (UID: \"fc0c5961-f1c7-4e5b-8c73-ec11bcd71140\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.025677    5892 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.047407    5892 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.100802    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfpk\" (UniqueName: \"kubernetes.io/projected/dec6e08d-aa7f-4511-ae76-036fb08eb5f0-kube-api-access-qhfpk\") pod \"dashboard-metrics-scraper-8685c45546-67f8b\" (UID: \"dec6e08d-aa7f-4511-ae76-036fb08eb5f0\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101169    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dec6e08d-aa7f-4511-ae76-036fb08eb5f0-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-67f8b\" (UID: \"dec6e08d-aa7f-4511-ae76-036fb08eb5f0\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101200    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92209727-a9d1-4943-a8c8-f0d00da0b005-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-vfmn7\" (UID: \"92209727-a9d1-4943-a8c8-f0d00da0b005\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101222    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlk2\" (UniqueName: \"kubernetes.io/projected/92209727-a9d1-4943-a8c8-f0d00da0b005-kube-api-access-vdlk2\") pod \"kubernetes-dashboard-6fcdf4f6d-vfmn7\" (UID: \"92209727-a9d1-4943-a8c8-f0d00da0b005\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765169    5892 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765207    5892 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765350    5892 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tt4tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2bkk5_kube-system(fc0c5961-f1c7-4e5b-8c73-ec11bcd71140): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765445    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:27 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:27.610991    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:28 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:28.007995    5892 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:13:38 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:38.130114    5892 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:40.882264    5892 scope.go:111] "RemoveContainer" containerID="34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.672777    5892 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.672932    5892 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.673093    5892 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tt4tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2bkk5_kube-system(fc0c5961-f1c7-4e5b-8c73-ec11bcd71140): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.673153    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:13:44 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:44.642848    5892 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6] <==
	* 2021/08/13 21:13:28 Starting overwatch
	2021/08/13 21:13:28 Using namespace: kubernetes-dashboard
	2021/08/13 21:13:28 Using in-cluster config to connect to apiserver
	2021/08/13 21:13:28 Using secret token for csrf signing
	2021/08/13 21:13:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:13:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:13:28 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:13:28 Generating JWE encryption key
	2021/08/13 21:13:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:13:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:13:28 Initializing JWE encryption key from synchronized object
	2021/08/13 21:13:28 Creating in-cluster Sidecar client
	2021/08/13 21:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:13:28 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00007a300, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000454780, 0x18e5530, 0xc00048a2c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003525e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003525e0, 0x18b3d60, 0xc0003540f0, 0x1, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003525e0, 0x3b9aca00, 0x0, 0x1, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003525e0, 0x3b9aca00, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 109 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0003fac80, 0xc00003c280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:13:58.248344  437394 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438: exit status 2 (309.241056ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813210115-393438 logs -n 25
E0813 21:13:59.349010  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210813210115-393438 logs -n 25: exit status 110 (11.234845s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:03:30 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:05:02 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:11:42 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:52 UTC | Fri, 13 Aug 2021 21:11:53 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                           | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:56 UTC | Fri, 13 Aug 2021 21:11:57 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:51 UTC | Fri, 13 Aug 2021 21:11:59 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2                        |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| -p      | no-preload-20210813210044-393438                           | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:40 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:41 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:43 UTC | Fri, 13 Aug 2021 21:12:44 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:45 UTC | Fri, 13 Aug 2021 21:12:46 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:47 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:48 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:13:29 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:43 UTC | Fri, 13 Aug 2021 21:13:44 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813211202-393438 --memory=2200          | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:13:48 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:48 UTC | Fri, 13 Aug 2021 21:13:49 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:49 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:53 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
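
For reference, the newest-cni start recorded in the table collapses to a single invocation along these lines (flags copied verbatim from the table rows; the out/minikube-linux-amd64 binary path matches the MINIKUBE_BIN value in the log below):

    out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 \
      --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubelet.network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=kvm2 --container-runtime=containerd \
      --kubernetes-version=v1.22.0-rc.0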
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:13:54
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:13:54.020451  437512 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:13:54.020538  437512 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:13:54.020546  437512 out.go:311] Setting ErrFile to fd 2...
	I0813 21:13:54.020550  437512 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:13:54.020637  437512 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:13:54.020903  437512 out.go:305] Setting JSON to false
	I0813 21:13:54.055358  437512 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6996,"bootTime":1628882238,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:13:54.055425  437512 start.go:121] virtualization: kvm guest
	I0813 21:13:54.057737  437512 out.go:177] * [newest-cni-20210813211202-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:13:54.059671  437512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:13:54.057886  437512 notify.go:169] Checking for updates...
	I0813 21:13:54.061068  437512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:13:54.062326  437512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:13:54.063561  437512 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:13:54.064047  437512 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:13:54.064501  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:54.064565  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:54.074934  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0813 21:13:54.075376  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:54.075897  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:13:54.075921  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:54.076299  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:54.076480  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:13:54.076619  437512 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:13:54.076921  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:54.076956  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:54.094270  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0813 21:13:54.094640  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:54.095079  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:13:54.095100  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:54.095429  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:54.095643  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:13:54.124933  437512 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 21:13:54.124956  437512 start.go:278] selected driver: kvm2
	I0813 21:13:54.124963  437512 start.go:751] validating driver "kvm2" against &{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:13:54.125108  437512 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:13:54.126071  437512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:13:54.126236  437512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:13:54.136552  437512 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:13:54.136862  437512 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 21:13:54.136884  437512 cni.go:93] Creating CNI manager for ""
	I0813 21:13:54.136890  437512 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:13:54.136897  437512 start_flags.go:277] config:
	{Name:newest-cni-20210813211202-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210813211202-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:13:54.137556  437512 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:13:54.139643  437512 out.go:177] * Starting control plane node newest-cni-20210813211202-393438 in cluster newest-cni-20210813211202-393438
	I0813 21:13:54.139674  437512 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 21:13:54.139702  437512 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 21:13:54.139730  437512 cache.go:56] Caching tarball of preloaded images
	I0813 21:13:54.139821  437512 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:13:54.139837  437512 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0813 21:13:54.139934  437512 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/newest-cni-20210813211202-393438/config.json ...
	I0813 21:13:54.140075  437512 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:13:54.140101  437512 start.go:313] acquiring machines lock for newest-cni-20210813211202-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:13:54.140150  437512 start.go:317] acquired machines lock for "newest-cni-20210813211202-393438" in 35.331µs
	I0813 21:13:54.140167  437512 start.go:93] Skipping create...Using existing machine configuration
	I0813 21:13:54.140177  437512 fix.go:55] fixHost starting: 
	I0813 21:13:54.140467  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:13:54.140498  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:13:54.150511  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0813 21:13:54.150999  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:13:54.151429  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:13:54.151448  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:13:54.151853  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:13:54.152049  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:13:54.152212  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:13:54.154944  437512 fix.go:108] recreateIfNeeded on newest-cni-20210813211202-393438: state=Stopped err=<nil>
	I0813 21:13:54.154977  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	W0813 21:13:54.155118  437512 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 21:13:52.620358  436296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:13:52.620484  436296 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:13:52.633463  436296 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0813 21:13:52.650444  436296 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:13:52.650524  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:52.650572  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=auto-20210813205925-393438 minikube.k8s.io/updated_at=2021_08_13T21_13_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:52.985314  436296 ops.go:34] apiserver oom_adj: -16
	I0813 21:13:52.985516  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:53.589979  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:54.090290  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:54.590136  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:55.089989  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:55.590206  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:56.089414  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:56.589923  436296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:13:54.156943  437512 out.go:177] * Restarting existing kvm2 VM for "newest-cni-20210813211202-393438" ...
	I0813 21:13:54.156978  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Start
	I0813 21:13:54.157137  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring networks are active...
	I0813 21:13:54.158967  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network default is active
	I0813 21:13:54.159315  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Ensuring network mk-newest-cni-20210813211202-393438 is active
	I0813 21:13:54.159711  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Getting domain xml...
	I0813 21:13:54.161596  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Creating domain...
	I0813 21:13:54.565085  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Waiting to get IP...
	I0813 21:13:54.565829  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:13:54.566364  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has current primary IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:13:54.566401  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Found IP for machine: 192.168.61.119
	I0813 21:13:54.566425  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Reserving static IP address...
	I0813 21:13:54.566903  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Reserved static IP address: 192.168.61.119
	I0813 21:13:54.566952  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "newest-cni-20210813211202-393438", mac: "52:54:00:cc:cf:c7", ip: "192.168.61.119"} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:13:54.566971  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Waiting for SSH to be available...
	I0813 21:13:54.567030  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | skip adding static IP to network mk-newest-cni-20210813211202-393438 - found existing host DHCP lease matching {name: "newest-cni-20210813211202-393438", mac: "52:54:00:cc:cf:c7", ip: "192.168.61.119"}
	I0813 21:13:54.567064  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Getting to WaitForSSH function...
	I0813 21:13:54.571391  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:13:54.571698  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:12:18 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:13:54.571731  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:13:54.571854  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Using SSH client type: external
	I0813 21:13:54.571900  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa (-rw-------)
	I0813 21:13:54.571939  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:13:54.571969  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | About to run SSH command:
	I0813 21:13:54.571985  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | exit 0
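
The WaitForSSH probe above shells out to the system ssh client with the argument list logged by libmachine. Reconstructed from that log line (options abbreviated, key path shortened), the equivalent manual liveness check is roughly:

    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o PasswordAuthentication=no -o IdentitiesOnly=yes \
      -i .minikube/machines/newest-cni-20210813211202-393438/id_rsa \
      -p 22 docker@192.168.61.119 'exit 0'

A zero exit status is all minikube needs to conclude the restarted VM is reachable.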
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	fee4d3f11b1ed       523cad1a4df73       18 seconds ago       Exited              dashboard-metrics-scraper   1                   2757524da210c
	34b7bed0566a4       523cad1a4df73       20 seconds ago       Exited              dashboard-metrics-scraper   0                   2757524da210c
	feba03c02cdd6       9a07b5b4bfac0       32 seconds ago       Running             kubernetes-dashboard        0                   6c02d5e3e4d66
	f0d1f74ac1b6b       6e38f40d628db       32 seconds ago       Exited              storage-provisioner         0                   1a65131b1565e
	fce52cb2f6260       296a6d5035e2d       36 seconds ago       Running             coredns                     0                   bf382a6a87c89
	f982e62ab4f99       adb2816ea823a       37 seconds ago       Running             kube-proxy                  0                   386546988b9ed
	875436dc90a14       bc2bb319a7038       About a minute ago   Running             kube-controller-manager     0                   c8063b597e446
	699b039e9f9b1       6be0dc1302e30       About a minute ago   Running             kube-scheduler              0                   11f1b554ea9b0
	6b45b9874af45       0369cf4303ffd       About a minute ago   Running             etcd                        0                   3096fb97d92c5
	7eeee683347cf       3d174f00aa39e       About a minute ago   Running             kube-apiserver              0                   2bed3213c62a1
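
The table above has the shape of crictl ps -a output collected on the embed-certs node; assuming the profile is still up, it can be regenerated with:

    out/minikube-linux-amd64 ssh -p embed-certs-20210813210115-393438 "sudo crictl ps -a"

Note that the two Exited dashboard-metrics-scraper attempts (0 and 1) share the same pod ID: the kubelet is restarting the container inside a single sandbox, which matches the containerd log that follows.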
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:05:42 UTC, end at Fri 2021-08-13 21:13:59 UTC. --
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.717882878Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:39 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:39.726898624Z" level=info msg="StartContainer for \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.339967870Z" level=info msg="StartContainer for \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\" returns successfully"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.406667324Z" level=info msg="Finish piping stderr of container \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.406910822Z" level=info msg="Finish piping stdout of container \"34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a\""
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.409336125Z" level=info msg="TaskExit event &TaskExit{ContainerID:34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a,ID:34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a,Pid:7018,ExitStatus:1,ExitedAt:2021-08-13 21:13:40.408125691 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.495193651Z" level=info msg="shim disconnected" id=34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.495838003Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:40.937053896Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 13 21:13:41 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:41.362284757Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.539891101Z" level=info msg="CreateContainer within sandbox \"2757524da210c592531828bb2a3e5fdd78431aa4b5c3c02b82b2bae966158b6f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.542172617Z" level=info msg="StartContainer for \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.637049269Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 13 21:13:43 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:43.670044750Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.246582040Z" level=info msg="StartContainer for \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\" returns successfully"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.284334110Z" level=info msg="TaskExit event &TaskExit{ContainerID:fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051,ID:fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051,Pid:7088,ExitStatus:1,ExitedAt:2021-08-13 21:13:44.283092537 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.285287363Z" level=info msg="Finish piping stderr of container \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.285866466Z" level=info msg="Finish piping stdout of container \"fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051\""
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.362143446Z" level=info msg="shim disconnected" id=fee4d3f11b1ed4d96756fe61b7721832ebbd01c655a61a4738c87ff7eb525051
	Aug 13 21:13:44 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:44.362281103Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 13 21:13:57 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:57.700325896Z" level=info msg="Finish piping stderr of container \"f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c\""
	Aug 13 21:13:57 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:57.701009105Z" level=info msg="Finish piping stdout of container \"f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c\""
	Aug 13 21:13:57 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:57.704971052Z" level=info msg="TaskExit event &TaskExit{ContainerID:f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c,ID:f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c,Pid:6845,ExitStatus:255,ExitedAt:2021-08-13 21:13:57.703288892 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 21:13:57 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:57.776316628Z" level=info msg="shim disconnected" id=f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c
	Aug 13 21:13:57 embed-certs-20210813210115-393438 containerd[2160]: time="2021-08-13T21:13:57.776635321Z" level=error msg="copy shim log" error="read /proc/self/fd/110: file already closed"
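
The PullImage failures above are expected rather than a containerd fault: in these tests the metrics-server addon is deliberately repointed at the unresolvable registry fake.domain (as in the addons enable command with --registries=MetricsServer=fake.domain in the table above). Assuming the profile is still up, the same failure reproduces on the node with:

    out/minikube-linux-amd64 ssh -p embed-certs-20210813210115-393438 \
      "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"
    # expected failure: dial tcp: lookup fake.domain ... no such host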
	
	* 
	* ==> coredns [fce52cb2f6260fe2a7deba36aaf9964a59e74e72c07a3aff48c451a6cd913c5a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
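
The [INFO] Reloading line means CoreDNS's reload plugin detected a changed Corefile (typically after an edit to the coredns ConfigMap) and re-read its configuration. The active Corefile can be inspected with:

    kubectl -n kube-system get configmap coredns -o yaml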
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000022] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +5.163103] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038045] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.934417] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +0.887909] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.291177] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011560] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug13 21:06] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[  +0.299806] systemd-fstab-generator[2103]: Ignoring "noauto" for root device
	[  +0.147130] systemd-fstab-generator[2118]: Ignoring "noauto" for root device
	[  +0.260000] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +6.278816] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[Aug13 21:07] NFSD: Unable to end grace period: -110
	[ +10.458309] kauditd_printk_skb: 38 callbacks suppressed
	[Aug13 21:08] kauditd_printk_skb: 95 callbacks suppressed
	[Aug13 21:12] systemd-fstab-generator[5466]: Ignoring "noauto" for root device
	[Aug13 21:13] systemd-fstab-generator[5883]: Ignoring "noauto" for root device
	[ +20.617504] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.000864] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.178877] kauditd_printk_skb: 8 callbacks suppressed
	[  +3.334235] systemd-fstab-generator[7138]: Ignoring "noauto" for root device
	[  +0.822151] systemd-fstab-generator[7191]: Ignoring "noauto" for root device
	[  +1.159607] systemd-fstab-generator[7244]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [6b45b9874af459784c643fc956517fcd2c6279d9ccc3fc28acae01a541df9c3c] <==
	* 2021-08-13 21:13:19.420432 W | etcdserver: request "header:<ID:10617934592130892125 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" mod_revision:326 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" value_size:6637 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" > >>" with result "size:16" took too long (592.01384ms) to execute
	2021-08-13 21:13:19.605699 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.001917504s) to execute
	2021-08-13 21:13:19.606103 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210813210115-393438\" " with result "range_response_count:1 size:6735" took too long (173.152403ms) to execute
	2021-08-13 21:13:19.606316 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-controller\" " with result "range_response_count:0 size:5" took too long (165.873818ms) to execute
	2021-08-13 21:13:19.606618 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.155473371s) to execute
	2021-08-13 21:13:19.609661 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210813210115-393438.169afa17a5ded5a6\" " with result "range_response_count:1 size:895" took too long (1.170325521s) to execute
	2021-08-13 21:13:27.958921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 21:13:36.726352 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000322354s) to execute
	WARNING: 2021/08/13 21:13:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 21:13:37.309446 W | wal: sync duration of 3.314706773s, expected less than 1s
	2021-08-13 21:13:37.900166 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:7838" took too long (3.417001855s) to execute
	2021-08-13 21:13:37.900890 W | etcdserver: request "header:<ID:10617934592130892712 > lease_revoke:<id:135a7b415c4b419e>" with result "size:28" took too long (590.17155ms) to execute
	2021-08-13 21:13:37.901082 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1131" took too long (1.887400452s) to execute
	2021-08-13 21:13:37.901444 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210813210115-393438.169afa17a5ded5a6\" " with result "range_response_count:1 size:895" took too long (730.346919ms) to execute
	2021-08-13 21:13:37.902109 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.174335828s) to execute
	2021-08-13 21:13:38.961813 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	2021-08-13 21:13:39.340358 W | wal: sync duration of 1.390341525s, expected less than 1s
	2021-08-13 21:13:39.364055 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:7838" took too long (415.65776ms) to execute
	2021-08-13 21:13:39.381863 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (660.579855ms) to execute
	2021-08-13 21:13:43.538667 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.818997612s) to execute
	2021-08-13 21:13:43.539311 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.221487782s) to execute
	2021-08-13 21:13:43.542827 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:8195" took too long (1.589978556s) to execute
	2021-08-13 21:13:43.542999 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1131" took too long (1.530664236s) to execute
	2021-08-13 21:13:43.543165 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-7c784ccb57-2bkk5\" " with result "range_response_count:1 size:4409" took too long (2.179910158s) to execute
	2021-08-13 21:13:43.543361 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-7c784ccb57-2bkk5.169afa1a72cf0e7f\" " with result "range_response_count:1 size:862" took too long (2.175445564s) to execute
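
The repeated "took too long" warnings and the wal sync durations above (3.3s and 1.4s against an expected <1s) point at slow disk I/O on the shared CI host rather than an etcd misconfiguration. A quick health check from inside the cluster looks roughly like the following; the static-pod name follows the etcd-<node> convention and the certificate paths are the usual minikube locations, both assumptions here:

    kubectl -n kube-system exec etcd-embed-certs-20210813210115-393438 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint status --write-out=table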
	
	* 
	* ==> kernel <==
	*  21:14:10 up 8 min,  0 users,  load average: 1.61, 0.99, 0.46
	Linux embed-certs-20210813210115-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [7eeee683347cfe2dc0e4ea2c21f8d6e68a26e746c44ef78f09078c9cfbd6bb69] <==
	* Trace[244210469]: ---"Object stored in database" 928ms (21:13:00.386)
	Trace[244210469]: [933.629175ms] [933.629175ms] END
	I0813 21:13:39.392667       1 trace.go:205] Trace[222088877]: "GuaranteedUpdate etcd3" type:*coordination.Lease (13-Aug-2021 21:13:38.660) (total time: 731ms):
	Trace[222088877]: ---"Transaction committed" 731ms (21:13:00.392)
	Trace[222088877]: [731.939748ms] [731.939748ms] END
	I0813 21:13:39.392796       1 trace.go:205] Trace[1904221178]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20210813210115-393438,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:38.660) (total time: 732ms):
	Trace[1904221178]: ---"Object stored in database" 732ms (21:13:00.392)
	Trace[1904221178]: [732.581644ms] [732.581644ms] END
	I0813 21:13:43.547227       1 trace.go:205] Trace[1624093536]: "List etcd3" key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 21:13:41.947) (total time: 1599ms):
	Trace[1624093536]: [1.599810701s] [1.599810701s] END
	I0813 21:13:43.547846       1 trace.go:205] Trace[31110861]: "List" url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.72.1,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.947) (total time: 1600ms):
	Trace[31110861]: ---"Listing from storage done" 1599ms (21:13:00.547)
	Trace[31110861]: [1.600479322s] [1.600479322s] END
	I0813 21:13:43.548136       1 trace.go:205] Trace[1408934217]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-7c784ccb57-2bkk5,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.358) (total time: 2190ms):
	Trace[1408934217]: ---"About to write a response" 2186ms (21:13:00.544)
	Trace[1408934217]: [2.190096646s] [2.190096646s] END
	I0813 21:13:43.550204       1 trace.go:205] Trace[1102788854]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.72.95,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 21:13:42.007) (total time: 1542ms):
	Trace[1102788854]: ---"About to write a response" 1542ms (21:13:00.550)
	Trace[1102788854]: [1.542934792s] [1.542934792s] END
	I0813 21:13:43.553301       1 trace.go:205] Trace[612282577]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 21:13:41.362) (total time: 2190ms):
	Trace[612282577]: ---"initial value restored" 2183ms (21:13:00.545)
	Trace[612282577]: [2.190558859s] [2.190558859s] END
	I0813 21:13:43.553454       1 trace.go:205] Trace[1330177402]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-7c784ccb57-2bkk5.169afa1a72cf0e7f,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.72.95,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:13:41.362) (total time: 2191ms):
	Trace[1330177402]: ---"About to apply patch" 2183ms (21:13:00.545)
	Trace[1330177402]: [2.191102717s] [2.191102717s] END
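
The Trace[...] blocks are kube-apiserver's built-in request tracing: a trace is emitted when a request exceeds a latency threshold (commonly 500ms), and the indented steps show where the time went; here it is almost entirely the etcd round trip, consistent with the etcd warnings above. To pull only the traces from the static pod's logs:

    kubectl -n kube-system logs kube-apiserver-embed-certs-20210813210115-393438 | grep -A 3 "Trace\["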
	
	* 
	* ==> kube-controller-manager [875436dc90a14667172cd52768e9ab1793690dd57a9e1a4bdc11fbd859762c18] <==
	* I0813 21:13:25.180182       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-2bkk5"
	I0813 21:13:25.410348       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 21:13:25.477057       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.488685       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 21:13:25.502904       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.589182       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.591738       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.592039       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.655451       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.655848       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.664892       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:13:25.760237       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.766097       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.773912       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.774119       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.850284       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.851244       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.859159       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.859419       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 21:13:25.899393       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.899624       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 21:13:25.900010       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 21:13:25.900141       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 21:13:25.986617       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-vfmn7"
	I0813 21:13:26.000386       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-67f8b"
	
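Note: the FailedCreate storm above is a startup ordering race, not a persistent failure. The dashboard ReplicaSets were reconciled before the kubernetes-dashboard ServiceAccount existed, and the controller kept retrying until the SuccessfulCreate events at 21:13:25-26, once the account appeared. An illustrative spot check that the account now exists (assuming the minikube-generated kubectl context for this profile):

	kubectl --context embed-certs-20210813210115-393438 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard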
	* 
	* ==> kube-proxy [f982e62ab4f99087c18d7cd4ee9906d0d2e40a1562275abb62ba8fe27519ed7b] <==
	* I0813 21:13:23.266077       1 node.go:172] Successfully retrieved node IP: 192.168.72.95
	I0813 21:13:23.266145       1 server_others.go:140] Detected node IP 192.168.72.95
	W0813 21:13:23.266182       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 21:13:23.825651       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 21:13:23.825768       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 21:13:23.825785       1 server_others.go:212] Using iptables Proxier.
	I0813 21:13:23.836104       1 server.go:643] Version: v1.21.3
	I0813 21:13:23.837231       1 config.go:315] Starting service config controller
	I0813 21:13:23.837405       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0813 21:13:23.870324       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:13:23.873987       1 config.go:224] Starting endpoint slice config controller
	I0813 21:13:23.873999       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0813 21:13:23.874006       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0813 21:13:23.883128       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 21:13:23.938593       1 shared_informer.go:247] Caches are synced for service config 
	
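Note: kube-proxy was started with an empty --proxy-mode, fell back to the iptables proxier, and stayed single-stack IPv4 because the guest reported no IPv6 iptables support ("exit status 3"). One illustrative way to confirm this from the host, assuming the embed-certs profile name, is to run ip6tables inside the guest:

	out/minikube-linux-amd64 ssh -p embed-certs-20210813210115-393438 -- sudo ip6tables -L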
	* 
	* ==> kube-scheduler [699b039e9f9b1d4823f2026e92046bbbb789221f97946f45e1b82beb63916fef] <==
	* E0813 21:12:58.219794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.220283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:58.220692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:58.221083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:58.221588       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:58.222033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.222086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:12:58.222145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:58.222192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:58.225870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:58.226457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:58.227197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:59.145974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 21:12:59.220852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:12:59.230721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 21:12:59.315601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.371690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 21:12:59.503452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:12:59.525707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:12:59.546836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:12:59.565060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.576665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:12:59.577912       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:12:59.674811       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 21:13:02.706776       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
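Note: the "forbidden" burst at 21:12:58-59 is the usual scheduler startup window, when its informers begin listing before the apiserver is serving the system:kube-scheduler RBAC bindings; the "Caches are synced" line at 21:13:02 shows it recovered on its own. If such errors persisted, a hedged spot check of the scheduler's permissions would be:

	kubectl --context embed-certs-20210813210115-393438 auth can-i list pods --as=system:kube-scheduler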
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:05:42 UTC, end at Fri 2021-08-13 21:14:10 UTC. --
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: W0813 21:13:25.212962    5892 container.go:586] Failed to update stats for container "/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140": /sys/fs/cgroup/cpuset/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:25.293677    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4tg\" (UniqueName: \"kubernetes.io/projected/fc0c5961-f1c7-4e5b-8c73-ec11bcd71140-kube-api-access-tt4tg\") pod \"metrics-server-7c784ccb57-2bkk5\" (UID: \"fc0c5961-f1c7-4e5b-8c73-ec11bcd71140\") "
	Aug 13 21:13:25 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:25.293857    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fc0c5961-f1c7-4e5b-8c73-ec11bcd71140-tmp-dir\") pod \"metrics-server-7c784ccb57-2bkk5\" (UID: \"fc0c5961-f1c7-4e5b-8c73-ec11bcd71140\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.025677    5892 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.047407    5892 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.100802    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfpk\" (UniqueName: \"kubernetes.io/projected/dec6e08d-aa7f-4511-ae76-036fb08eb5f0-kube-api-access-qhfpk\") pod \"dashboard-metrics-scraper-8685c45546-67f8b\" (UID: \"dec6e08d-aa7f-4511-ae76-036fb08eb5f0\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101169    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dec6e08d-aa7f-4511-ae76-036fb08eb5f0-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-67f8b\" (UID: \"dec6e08d-aa7f-4511-ae76-036fb08eb5f0\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101200    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92209727-a9d1-4943-a8c8-f0d00da0b005-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-vfmn7\" (UID: \"92209727-a9d1-4943-a8c8-f0d00da0b005\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:26.101222    5892 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlk2\" (UniqueName: \"kubernetes.io/projected/92209727-a9d1-4943-a8c8-f0d00da0b005-kube-api-access-vdlk2\") pod \"kubernetes-dashboard-6fcdf4f6d-vfmn7\" (UID: \"92209727-a9d1-4943-a8c8-f0d00da0b005\") "
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765169    5892 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765207    5892 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765350    5892 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tt4tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2bkk5_kube-system(fc0c5961-f1c7-4e5b-8c73-ec11bcd71140): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:13:26 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:26.765445    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:27 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:27.610991    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:28 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:28.007995    5892 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:13:38 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:38.130114    5892 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podfc0c5961-f1c7-4e5b-8c73-ec11bcd71140\": RecentStats: unable to find data in memory cache]"
	Aug 13 21:13:40 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:40.882264    5892 scope.go:111] "RemoveContainer" containerID="34b7bed0566a472b86c850bc0bd9425cdf7b86de5a971d9a10d48a78729df35a"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.672777    5892 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.672932    5892 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.673093    5892 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tt4tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2bkk5_kube-system(fc0c5961-f1c7-4e5b-8c73-ec11bcd71140): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 13 21:13:43 embed-certs-20210813210115-393438 kubelet[5892]: E0813 21:13:43.673153    5892 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2bkk5" podUID=fc0c5961-f1c7-4e5b-8c73-ec11bcd71140
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:13:44 embed-certs-20210813210115-393438 kubelet[5892]: I0813 21:13:44.642848    5892 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:13:44 embed-certs-20210813210115-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
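Note: fake.domain appears to be an intentionally unresolvable registry used by the test's metrics-server image override, so every ErrImagePull/ImagePullBackOff entry above is expected noise rather than the cause of this Pause failure. The lookup failure can be reproduced in the guest, for example (crictl is already in use by the test itself):

	out/minikube-linux-amd64 ssh -p embed-certs-20210813210115-393438 -- sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4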
	* 
	* ==> kubernetes-dashboard [feba03c02cdd622b87d27aa4411e8cf5b5bf08712b9b1a0d583af4a4fa59c6e6] <==
	* 2021/08/13 21:13:28 Using namespace: kubernetes-dashboard
	2021/08/13 21:13:28 Using in-cluster config to connect to apiserver
	2021/08/13 21:13:28 Using secret token for csrf signing
	2021/08/13 21:13:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 21:13:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 21:13:28 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 21:13:28 Generating JWE encryption key
	2021/08/13 21:13:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 21:13:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 21:13:28 Initializing JWE encryption key from synchronized object
	2021/08/13 21:13:28 Creating in-cluster Sidecar client
	2021/08/13 21:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 21:13:28 Serving insecurely on HTTP port: 9090
	2021/08/13 21:13:28 Starting overwatch
	
	* 
	* ==> storage-provisioner [f0d1f74ac1b6b6e6f64dd789f857e0a6950b1f835fd6bf1dd56e0b8ddc7cca7c] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00007a300, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000454780, 0x18e5530, 0xc00048a2c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003525e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003525e0, 0x18b3d60, 0xc0003540f0, 0x1, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003525e0, 0x3b9aca00, 0x0, 0x1, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003525e0, 0x3b9aca00, 0xc00007e5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 109 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc0003fac80, 0xc00003c280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:14:09.938748  437651 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (26.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (85.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813211202-393438 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210813211202-393438 --alsologtostderr -v=1: exit status 80 (2.41844647s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210813211202-393438 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 21:15:49.255286  438827 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:15:49.255491  438827 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:15:49.255504  438827 out.go:311] Setting ErrFile to fd 2...
	I0813 21:15:49.255510  438827 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:15:49.255651  438827 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:15:49.255907  438827 out.go:305] Setting JSON to false
	I0813 21:15:49.255931  438827 mustload.go:65] Loading cluster: newest-cni-20210813211202-393438
	I0813 21:15:49.256342  438827 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:15:49.256899  438827 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:49.256949  438827 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:49.269300  438827 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0813 21:15:49.269786  438827 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:49.270503  438827 main.go:130] libmachine: Using API Version  1
	I0813 21:15:49.270529  438827 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:49.271033  438827 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:49.271234  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:49.274875  438827 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:49.275313  438827 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:49.275356  438827 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:49.286867  438827 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0813 21:15:49.287265  438827 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:49.287706  438827 main.go:130] libmachine: Using API Version  1
	I0813 21:15:49.287730  438827 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:49.288050  438827 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:49.288199  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:49.288765  438827 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210813211202-393438 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 21:15:49.291272  438827 out.go:177] * Pausing node newest-cni-20210813211202-393438 ... 
	I0813 21:15:49.291309  438827 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:49.291629  438827 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:49.291700  438827 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:49.302828  438827 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0813 21:15:49.303204  438827 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:49.303701  438827 main.go:130] libmachine: Using API Version  1
	I0813 21:15:49.303725  438827 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:49.304107  438827 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:49.304314  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:49.304515  438827 ssh_runner.go:149] Run: systemctl --version
	I0813 21:15:49.304545  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:15:49.310660  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:49.311110  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:14:05 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:15:49.311137  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:49.311247  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:15:49.311390  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:15:49.311512  438827 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:15:49.311620  438827 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:15:49.425473  438827 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:15:49.439993  438827 pause.go:50] kubelet running: true
	I0813 21:15:49.440057  438827 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:15:49.686695  438827 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:15:49.686801  438827 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:15:49.855128  438827 cri.go:76] found id: "7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f"
	I0813 21:15:49.855168  438827 cri.go:76] found id: "c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847"
	I0813 21:15:49.855176  438827 cri.go:76] found id: "c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6"
	I0813 21:15:49.855181  438827 cri.go:76] found id: "5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1"
	I0813 21:15:49.855185  438827 cri.go:76] found id: "116848b32df564f7527d30d663d98dc9754820ec664386dba0d7b2d25e56a787"
	I0813 21:15:49.855195  438827 cri.go:76] found id: ""
	I0813 21:15:49.855244  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:15:49.887919  438827 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","pid":2771,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65/rootfs","created":"2021-08-13T21:15:13.96892618Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813211202-393438_23f20a6ab1f9bfbd5f9cb779fc57f4ac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","pid":2580,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e/rootfs","created":"2021-08-13T21:14:59.976785271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813211202-393438_d3f7300c03150dfa848d9f1a07380707"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","pid":2810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1/rootfs","created":"2021-08-13T21:15:14.43872153Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","pid":2966,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f/rootfs","created":"2021-08-13T21:15:35.066237599Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","pid":2708,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45/rootfs","created":"2021-08-13T21:15:12.970733459Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813211202-393438_aaaf016c03e8a038279f21d4d3c45a59"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","pid":2472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1/rootfs","created":"2021-08-13T21:14:48.40799977Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813211202-393438_66db91a7f189464846de3b76c282f6af"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","pid":2922,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847/rootfs","created":"2021-08-13T21:15:29.028830048Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","pid":2871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6/rootfs","created":"2021-08-13T21:15:23.845001289Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e"},"owner":"root"}]
	I0813 21:15:49.888091  438827 cri.go:113] list returned 8 containers
	I0813 21:15:49.888108  438827 cri.go:116] container: {ID:0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 Status:running}
	I0813 21:15:49.888138  438827 cri.go:118] skipping 0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 - not in ps
	I0813 21:15:49.888148  438827 cri.go:116] container: {ID:39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e Status:running}
	I0813 21:15:49.888155  438827 cri.go:118] skipping 39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e - not in ps
	I0813 21:15:49.888164  438827 cri.go:116] container: {ID:5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 Status:running}
	I0813 21:15:49.888171  438827 cri.go:116] container: {ID:7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f Status:running}
	I0813 21:15:49.888180  438827 cri.go:116] container: {ID:7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 Status:running}
	I0813 21:15:49.888187  438827 cri.go:118] skipping 7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 - not in ps
	I0813 21:15:49.888197  438827 cri.go:116] container: {ID:88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 Status:running}
	I0813 21:15:49.888204  438827 cri.go:118] skipping 88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 - not in ps
	I0813 21:15:49.888209  438827 cri.go:116] container: {ID:c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847 Status:running}
	I0813 21:15:49.888216  438827 cri.go:116] container: {ID:c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6 Status:running}
	I0813 21:15:49.888267  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1
	I0813 21:15:49.913420  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f
	I0813 21:15:49.933690  438827 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:15:49Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:15:50.210150  438827 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:15:50.222498  438827 pause.go:50] kubelet running: false
	I0813 21:15:50.222560  438827 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:15:50.390042  438827 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:15:50.390127  438827 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:15:50.539184  438827 cri.go:76] found id: "7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f"
	I0813 21:15:50.539221  438827 cri.go:76] found id: "c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847"
	I0813 21:15:50.539229  438827 cri.go:76] found id: "c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6"
	I0813 21:15:50.539235  438827 cri.go:76] found id: "5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1"
	I0813 21:15:50.539241  438827 cri.go:76] found id: "116848b32df564f7527d30d663d98dc9754820ec664386dba0d7b2d25e56a787"
	I0813 21:15:50.539247  438827 cri.go:76] found id: ""
	I0813 21:15:50.539296  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:15:50.576982  438827 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","pid":2771,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65/rootfs","created":"2021-08-13T21:15:13.96892618Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813211202-393438_23f20a6ab1f9bfbd5f9cb779fc57f4ac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","pid":2580,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e/rootfs","created":"2021-08-13T21:14:59.976785271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813211202-393438_d3f7300c03150dfa848d9f1a07380707"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","pid":2810,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1/rootfs","created":"2021-08-13T21:15:14.43872153Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","pid":2966,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f/rootfs","created":"2021-08-13T21:15:35.066237599Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","pid":2708,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45/rootfs","created":"2021-08-13T21:15:12.970733459Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813211202-393438_aaaf016c03e8a038279f21d4d3c45a59"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","pid":2472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1/rootfs","created":"2021-08-13T21:14:48.40799977Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813211202-393438_66db91a7f189464846de3b76c282f6af"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","pid":2922,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847/rootfs","created":"2021-08-13T21:15:29.028830048Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","pid":2871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6/rootfs","created":"2021-08-13T21:15:23.845001289Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e"},"owner":"root"}]
	I0813 21:15:50.577131  438827 cri.go:113] list returned 8 containers
	I0813 21:15:50.577145  438827 cri.go:116] container: {ID:0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 Status:running}
	I0813 21:15:50.577159  438827 cri.go:118] skipping 0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 - not in ps
	I0813 21:15:50.577172  438827 cri.go:116] container: {ID:39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e Status:running}
	I0813 21:15:50.577187  438827 cri.go:118] skipping 39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e - not in ps
	I0813 21:15:50.577193  438827 cri.go:116] container: {ID:5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 Status:paused}
	I0813 21:15:50.577199  438827 cri.go:122] skipping {5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 paused}: state = "paused", want "running"
	I0813 21:15:50.577212  438827 cri.go:116] container: {ID:7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f Status:running}
	I0813 21:15:50.577218  438827 cri.go:116] container: {ID:7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 Status:running}
	I0813 21:15:50.577224  438827 cri.go:118] skipping 7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 - not in ps
	I0813 21:15:50.577229  438827 cri.go:116] container: {ID:88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 Status:running}
	I0813 21:15:50.577235  438827 cri.go:118] skipping 88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 - not in ps
	I0813 21:15:50.577241  438827 cri.go:116] container: {ID:c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847 Status:running}
	I0813 21:15:50.577258  438827 cri.go:116] container: {ID:c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6 Status:running}
	I0813 21:15:50.577311  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f
	I0813 21:15:50.602170  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847
	I0813 21:15:50.623988  438827 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:15:50Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0813 21:15:51.164732  438827 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:15:51.177741  438827 pause.go:50] kubelet running: false
	I0813 21:15:51.177808  438827 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 21:15:51.364297  438827 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 21:15:51.364403  438827 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 21:15:51.515288  438827 cri.go:76] found id: "7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f"
	I0813 21:15:51.515318  438827 cri.go:76] found id: "c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847"
	I0813 21:15:51.515325  438827 cri.go:76] found id: "c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6"
	I0813 21:15:51.515331  438827 cri.go:76] found id: "5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1"
	I0813 21:15:51.515336  438827 cri.go:76] found id: "116848b32df564f7527d30d663d98dc9754820ec664386dba0d7b2d25e56a787"
	I0813 21:15:51.515343  438827 cri.go:76] found id: ""
	I0813 21:15:51.515401  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 21:15:51.553547  438827 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","pid":2771,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65/rootfs","created":"2021-08-13T21:15:13.96892618Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210813211202-393438_23f20a6ab1f9bfbd5f9cb779fc57f4ac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","pid":2580,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b
6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e/rootfs","created":"2021-08-13T21:14:59.976785271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210813211202-393438_d3f7300c03150dfa848d9f1a07380707"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","pid":2810,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1/rootfs","created":"2021-08-13T21:15:14.43872153Z","annotations":{"io.kubernetes.cri.containe
r-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","pid":2966,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f/rootfs","created":"2021-08-13T21:15:35.066237599Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","pid":2708,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45/rootfs","created":"2021-08-13T21:15:12.970733459Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210813211202-393438_aaaf016c03e8a038279f21d4d3c45a59"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","pid":2472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1/rootfs","created":"2021-08-13T21:14:48.40
799977Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210813211202-393438_66db91a7f189464846de3b76c282f6af"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","pid":2922,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847/rootfs","created":"2021-08-13T21:15:29.028830048Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45"},"owner":"root"},{"ociVersion":"
1.0.2-dev","id":"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","pid":2871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6/rootfs","created":"2021-08-13T21:15:23.845001289Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e"},"owner":"root"}]
	I0813 21:15:51.553700  438827 cri.go:113] list returned 8 containers
	I0813 21:15:51.553716  438827 cri.go:116] container: {ID:0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 Status:running}
	I0813 21:15:51.553740  438827 cri.go:118] skipping 0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65 - not in ps
	I0813 21:15:51.553751  438827 cri.go:116] container: {ID:39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e Status:running}
	I0813 21:15:51.553759  438827 cri.go:118] skipping 39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e - not in ps
	I0813 21:15:51.553764  438827 cri.go:116] container: {ID:5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 Status:paused}
	I0813 21:15:51.553777  438827 cri.go:122] skipping {5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1 paused}: state = "paused", want "running"
	I0813 21:15:51.553792  438827 cri.go:116] container: {ID:7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f Status:paused}
	I0813 21:15:51.553804  438827 cri.go:122] skipping {7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f paused}: state = "paused", want "running"
	I0813 21:15:51.553815  438827 cri.go:116] container: {ID:7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 Status:running}
	I0813 21:15:51.553822  438827 cri.go:118] skipping 7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45 - not in ps
	I0813 21:15:51.553828  438827 cri.go:116] container: {ID:88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 Status:running}
	I0813 21:15:51.553836  438827 cri.go:118] skipping 88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1 - not in ps
	I0813 21:15:51.553842  438827 cri.go:116] container: {ID:c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847 Status:running}
	I0813 21:15:51.553848  438827 cri.go:116] container: {ID:c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6 Status:running}
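	
	For reference, the filtering decisions logged above (skip containers whose state is "paused" when "running" is wanted, skip IDs absent from the crictl ps output) can be reproduced from the `runc list -f json` payload alone. The sketch below unmarshals a trimmed version of that JSON; runcContainer and runningIDs are hypothetical names for illustration, not minikube's cri package API.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// runcContainer keeps only the fields of the `runc list -f json`
	// output (shown verbatim above) that the filtering needs.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	// runningIDs mirrors the cri.go decisions logged above: keep containers
	// whose state is "running" and whose ID also appeared in `crictl ps`.
	func runningIDs(listJSON []byte, inPs map[string]bool) ([]string, error) {
		var cs []runcContainer
		if err := json.Unmarshal(listJSON, &cs); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range cs {
			if c.Status != "running" || !inPs[c.ID] {
				continue // corresponds to the "skipping ..." lines above
			}
			ids = append(ids, c.ID)
		}
		return ids, nil
	}
	
	func main() {
		sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
		ids, _ := runningIDs(sample, map[string]bool{"abc": true})
		fmt.Println(ids) // [abc]
	}
	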
	I0813 21:15:51.553908  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847
	I0813 21:15:51.573235  438827 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847 c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6
	I0813 21:15:51.602340  438827 out.go:177] 
	W0813 21:15:51.602577  438827 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847 c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T21:15:51Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0813 21:15:51.602596  438827 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 21:15:51.606409  438827 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 21:15:51.607809  438827 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210813211202-393438 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438: exit status 2 (261.724743ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813211202-393438 logs -n 25
E0813 21:15:53.488277  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:16:02.230693  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813211202-393438 logs -n 25: exit status 110 (40.985575771s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | no-preload-20210813210044-393438                           | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:11:58 UTC | Fri, 13 Aug 2021 21:12:00 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:01 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210813210044-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:12:02 UTC |
	|         | no-preload-20210813210044-393438                           |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:13 UTC | Fri, 13 Aug 2021 21:12:13 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:04:27 UTC | Fri, 13 Aug 2021 21:12:23 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                  |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                  |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                  |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                  |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:40 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:40 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813210121-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:41 UTC | Fri, 13 Aug 2021 21:12:41 UTC |
	|         | default-k8s-different-port-20210813210121-393438           |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:43 UTC | Fri, 13 Aug 2021 21:12:44 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:45 UTC | Fri, 13 Aug 2021 21:12:46 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:47 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:48 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                                  |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:13:29 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:43 UTC | Fri, 13 Aug 2021 21:13:44 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813211202-393438 --memory=2200          | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:13:48 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:48 UTC | Fri, 13 Aug 2021 21:13:49 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:49 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:53 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:10 UTC | Fri, 13 Aug 2021 21:14:11 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813210115-393438                | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:11 UTC | Fri, 13 Aug 2021 21:14:11 UTC |
	|         | embed-certs-20210813210115-393438                          |                                                  |         |         |                               |                               |
	| start   | -p auto-20210813205925-393438                              | auto-20210813205925-393438                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:42 UTC | Fri, 13 Aug 2021 21:14:48 UTC |
	|         | --memory=2048                                              |                                                  |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                  |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                  |         |         |                               |                               |
	| ssh     | -p auto-20210813205925-393438                              | auto-20210813205925-393438                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:48 UTC | Fri, 13 Aug 2021 21:14:48 UTC |
	|         | pgrep -a kubelet                                           |                                                  |         |         |                               |                               |
	| delete  | -p auto-20210813205925-393438                              | auto-20210813205925-393438                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:15:03 UTC | Fri, 13 Aug 2021 21:15:04 UTC |
	| start   | -p newest-cni-20210813211202-393438 --memory=2200          | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:54 UTC | Fri, 13 Aug 2021 21:15:48 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813211202-393438                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:15:48 UTC | Fri, 13 Aug 2021 21:15:49 UTC |
	|         | newest-cni-20210813211202-393438                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:15:04
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:15:04.478904  438411 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:15:04.479011  438411 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:15:04.479022  438411 out.go:311] Setting ErrFile to fd 2...
	I0813 21:15:04.479026  438411 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:15:04.479161  438411 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:15:04.479490  438411 out.go:305] Setting JSON to false
	I0813 21:15:04.516724  438411 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":7067,"bootTime":1628882238,"procs":179,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:15:04.516860  438411 start.go:121] virtualization: kvm guest
	I0813 21:15:04.519338  438411 out.go:177] * [custom-weave-20210813205926-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:15:04.520773  438411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:15:04.519495  438411 notify.go:169] Checking for updates...
	I0813 21:15:04.522183  438411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:15:04.523495  438411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:15:04.524888  438411 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:15:04.525358  438411 config.go:177] Loaded profile config "calico-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:15:04.525452  438411 config.go:177] Loaded profile config "cilium-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:15:04.525571  438411 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:15:04.525617  438411 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:15:04.561676  438411 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:15:04.561701  438411 start.go:278] selected driver: kvm2
	I0813 21:15:04.561706  438411 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:15:04.561722  438411 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:15:04.562872  438411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:15:04.563028  438411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:15:04.574286  438411 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:15:04.574359  438411 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 21:15:04.574556  438411 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:15:04.574589  438411 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 21:15:04.574622  438411 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0813 21:15:04.574635  438411 start_flags.go:277] config:
	{Name:custom-weave-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:15:04.574814  438411 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:15:04.576728  438411 out.go:177] * Starting control plane node custom-weave-20210813205926-393438 in cluster custom-weave-20210813205926-393438
	I0813 21:15:04.576766  438411 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:15:04.576809  438411 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 21:15:04.576833  438411 cache.go:56] Caching tarball of preloaded images
	I0813 21:15:04.576934  438411 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0813 21:15:04.576953  438411 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 21:15:04.577043  438411 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/config.json ...
	I0813 21:15:04.577069  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/config.json: {Name:mkd2027f8d6d0c39adc68189ff9ab2bcddf0159f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:04.577204  438411 cache.go:205] Successfully downloaded all kic artifacts
	I0813 21:15:04.577231  438411 start.go:313] acquiring machines lock for custom-weave-20210813205926-393438: {Name:mk8bf9f7b0c4b5b470b774aec39ccd1ea980ebef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 21:15:04.577268  438411 start.go:317] acquired machines lock for "custom-weave-20210813205926-393438" in 26.12µs
	I0813 21:15:04.577287  438411 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:15:04.577341  438411 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 21:15:04.197767  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:06.623508  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:08.635340  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:04.341343  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:04.842160  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:05.341206  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:05.841495  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:06.341974  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:06.841590  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:07.341913  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:07.842080  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:08.341946  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:08.842061  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:04.579100  438411 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 21:15:04.579222  438411 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:04.579260  438411 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:04.589167  438411 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40783
	I0813 21:15:04.589593  438411 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:04.590142  438411 main.go:130] libmachine: Using API Version  1
	I0813 21:15:04.590164  438411 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:04.590521  438411 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:04.590719  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetMachineName
	I0813 21:15:04.590844  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:04.590986  438411 start.go:160] libmachine.API.Create for "custom-weave-20210813205926-393438" (driver="kvm2")
	I0813 21:15:04.591016  438411 client.go:168] LocalClient.Create starting
	I0813 21:15:04.591051  438411 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:15:04.591086  438411 main.go:130] libmachine: Decoding PEM data...
	I0813 21:15:04.591102  438411 main.go:130] libmachine: Parsing certificate...
	I0813 21:15:04.591198  438411 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:15:04.591219  438411 main.go:130] libmachine: Decoding PEM data...
	I0813 21:15:04.591229  438411 main.go:130] libmachine: Parsing certificate...
	I0813 21:15:04.591304  438411 main.go:130] libmachine: Running pre-create checks...
	I0813 21:15:04.591316  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .PreCreateCheck
	I0813 21:15:04.591599  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:15:04.592032  438411 main.go:130] libmachine: Creating machine...
	I0813 21:15:04.592048  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .Create
	I0813 21:15:04.592191  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Creating KVM machine...
	I0813 21:15:04.595135  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found existing default KVM network
	I0813 21:15:04.597009  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:04.596874  438434 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000a05d8] misses:0}
	I0813 21:15:04.597038  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:04.596957  438434 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
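	
	The two DBG lines above show libmachine reserving the 192.168.39.0/24 subnet for 1m0s before creating the private KVM network for this profile. A toy sketch of such a time-bounded reservation follows; reserveSubnet and the mutex-guarded map are assumptions for illustration and deliberately ignore the concurrent-map internals printed in the log.
	
	package main
	
	import (
		"fmt"
		"sync"
		"time"
	)
	
	var (
		mu         sync.Mutex
		reservedAt = map[string]time.Time{}
	)
	
	// reserveSubnet grants a subnet to the caller for duration d, so that
	// parallel profile creations (as in this test run) do not pick the
	// same private network before it is actually created.
	func reserveSubnet(subnet string, d time.Duration) bool {
		mu.Lock()
		defer mu.Unlock()
		if exp, ok := reservedAt[subnet]; ok && time.Now().Before(exp) {
			return false // still held by another profile
		}
		reservedAt[subnet] = time.Now().Add(d)
		return true
	}
	
	func main() {
		fmt.Println(reserveSubnet("192.168.39.0/24", time.Minute)) // true
		fmt.Println(reserveSubnet("192.168.39.0/24", time.Minute)) // false
	}
	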
	I0813 21:15:04.632879  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | trying to create private KVM network mk-custom-weave-20210813205926-393438 192.168.39.0/24...
	I0813 21:15:04.875225  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | private KVM network mk-custom-weave-20210813205926-393438 192.168.39.0/24 created
	I0813 21:15:04.875266  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:04.875178  438434 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:15:04.875287  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438 ...
	I0813 21:15:04.875314  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:15:04.875400  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:15:05.102764  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:05.102642  438434 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa...
	I0813 21:15:05.327798  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:05.327669  438434 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/custom-weave-20210813205926-393438.rawdisk...
	I0813 21:15:05.327839  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Writing magic tar header
	I0813 21:15:05.327853  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Writing SSH key tar header
	I0813 21:15:05.328060  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:05.327877  438434 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438 ...
	I0813 21:15:05.328132  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438 (perms=drwx------)
	I0813 21:15:05.328165  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438
	I0813 21:15:05.328216  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:15:05.328254  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:15:05.328273  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:15:05.328294  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:15:05.328312  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:15:05.328341  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:15:05.328354  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:15:05.328366  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:15:05.328380  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Creating domain...
	I0813 21:15:05.328416  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:15:05.328442  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:15:05.328477  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Checking permissions on dir: /home
	I0813 21:15:05.328501  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Skipping /home - not owner
	I0813 21:15:05.356813  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:c8:99:e3 in network default
	I0813 21:15:05.357477  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Ensuring networks are active...
	I0813 21:15:05.357506  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:05.360338  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Ensuring network default is active
	I0813 21:15:05.360676  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Ensuring network mk-custom-weave-20210813205926-393438 is active
	I0813 21:15:05.361381  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Getting domain xml...
	I0813 21:15:05.363724  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Creating domain...
	I0813 21:15:05.830175  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Waiting to get IP...
	I0813 21:15:05.831429  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:05.832198  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:05.832234  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:05.832113  438434 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:15:06.096685  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.097289  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.097335  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:06.097221  438434 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:15:06.480090  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.480731  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.480773  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:06.480656  438434 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:15:06.905434  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.906087  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:06.906120  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:06.906033  438434 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:15:07.380314  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:07.380900  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:07.381030  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:07.380953  438434 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:15:07.969584  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:07.970137  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:07.970169  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:07.970077  438434 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:15:08.805562  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:08.806067  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:08.806098  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:08.806027  438434 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
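
The retry.go lines above show libmachine polling for the guest's DHCP lease, with the delay growing across attempts (263ms, 381ms, 422ms, ... up to several seconds). A minimal Go sketch of that wait-for-IP loop; lookupIP, the starting delay, and the jitter factor are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the
// domain's MAC address; it fails until a lease shows up.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with delays that grow and carry some
// jitter, mirroring the increasing intervals in the log above.
func waitForIP(mac string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off gradually between polls
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := waitForIP("52:54:00:0d:81:dc", 5); err != nil {
		fmt.Println(err)
	}
}
```
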
	I0813 21:15:11.122693  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:13.639383  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:09.341165  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:09.841270  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:10.341951  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:10.841976  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:11.341700  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:11.841671  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:12.341247  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:12.841554  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:13.342213  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:13.841300  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
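
The pid 437512 lines show a second pattern: ssh_runner re-issuing `sudo pgrep -xnf kube-apiserver.*minikube.*` on the guest at a steady ~500ms cadence until the apiserver process exists (pgrep exits 0 on a match). A sketch of that poll; runSSH is an illustrative stand-in for minikube's ssh_runner, not its real helper:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH shells out to ssh for illustration; key handling and
// connection reuse are omitted.
func runSSH(host, cmd string) error {
	return exec.Command("ssh", host, cmd).Run()
}

// waitForAPIServer polls pgrep on the guest roughly twice a second
// until a kube-apiserver process matching the pattern appears.
func waitForAPIServer(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if runSSH(host, `sudo pgrep -xnf kube-apiserver.*minikube.*`) == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	_ = waitForAPIServer("docker@192.168.39.226", time.Minute)
}
```
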
	I0813 21:15:09.553967  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:09.554439  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:09.554469  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:09.554383  438434 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 21:15:10.543063  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:10.543613  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:10.543648  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:10.543565  438434 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 21:15:11.734660  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:11.735201  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:11.735240  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:11.735129  438434 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 21:15:13.413723  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:13.414229  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:13.414273  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:13.414176  438434 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 21:15:17.504849  437853 out.go:204]   - Generating certificates and keys ...
	I0813 21:15:17.508150  437853 out.go:204]   - Booting up control plane ...
	I0813 21:15:17.510818  437853 out.go:204]   - Configuring RBAC rules ...
	I0813 21:15:17.513293  437853 cni.go:93] Creating CNI manager for "calico"
	I0813 21:15:16.121652  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:18.128556  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
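
The pod_ready.go lines for pid 436805 are repeatedly reporting the cilium pod's Ready condition as "False". The underlying check is a read of the PodReady condition in the pod status; a client-go sketch of that check at roughly the 2s cadence visible in the log (the kubeconfig path is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// the same status the log above keeps printing as "False".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-jjq8z", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```
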
	I0813 21:15:14.341740  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:14.841920  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:15.341626  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:15.841271  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:16.342141  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:16.841778  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:17.341881  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:17.841269  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:18.341952  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:18.841407  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:15.762378  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:15.762908  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:15.762935  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:15.762860  438434 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 21:15:19.131824  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:19.132457  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find current IP address of domain custom-weave-20210813205926-393438 in network mk-custom-weave-20210813205926-393438
	I0813 21:15:19.132490  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | I0813 21:15:19.132367  438434 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 21:15:17.514824  437853 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0813 21:15:17.515037  437853 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 21:15:17.515057  437853 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0813 21:15:17.545638  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 21:15:20.359365  437853 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.813682082s)
	I0813 21:15:20.359430  437853 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:15:20.359506  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:20.359532  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=852050cf77fe767e86d5a194bb91c06c4dc6c13c minikube.k8s.io/name=calico-20210813205926-393438 minikube.k8s.io/updated_at=2021_08_13T21_15_20_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:20.427254  437853 ops.go:34] apiserver oom_adj: -16
	I0813 21:15:20.662599  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:21.263964  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:21.763364  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
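
For the calico profile (pid 437853), the CNI step above is two commands: scp the 202049-byte manifest to /var/tmp/minikube/cni.yaml on the node, then apply it with the pinned kubectl binary against the on-host kubeconfig (the apply took ~2.8s). A sketch of the apply half as logged; in minikube this actually runs over SSH inside the guest, which is elided here:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyCNI mirrors the logged command: run the version-pinned
// kubectl against the node's kubeconfig to apply the manifest
// that was previously scp'd onto it.
func applyCNI() error {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.21.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyCNI(); err != nil {
		fmt.Println(err)
	}
}
```
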
	I0813 21:15:20.624005  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:23.125325  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:19.341560  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:19.842010  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:20.341183  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:20.841618  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:21.341599  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:21.842028  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:22.341224  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:22.841597  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:23.342209  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:23.841526  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:22.251739  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:22.252317  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Found IP for machine: 192.168.39.226
	I0813 21:15:22.252353  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has current primary IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:22.252367  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Reserving static IP address...
	I0813 21:15:22.252762  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | unable to find host DHCP lease matching {name: "custom-weave-20210813205926-393438", mac: "52:54:00:0d:81:dc", ip: "192.168.39.226"} in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.059387  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Getting to WaitForSSH function...
	I0813 21:15:23.059422  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Reserved static IP address: 192.168.39.226
	I0813 21:15:23.059438  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Waiting for SSH to be available...
	I0813 21:15:23.065771  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.066243  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.066277  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.066749  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Using SSH client type: external
	I0813 21:15:23.066783  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa (-rw-------)
	I0813 21:15:23.066835  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 21:15:23.066850  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | About to run SSH command:
	I0813 21:15:23.066867  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | exit 0
	I0813 21:15:23.206799  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | SSH cmd err, output: <nil>: 
	I0813 21:15:23.207358  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) KVM machine creation complete!
	I0813 21:15:23.207413  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:15:23.208146  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:23.208355  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:23.208591  438411 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 21:15:23.208613  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetState
	I0813 21:15:23.211797  438411 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 21:15:23.211816  438411 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 21:15:23.211826  438411 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 21:15:23.211836  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.217181  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.217608  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.217640  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.217743  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:23.217934  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.218146  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.218319  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:23.218500  438411 main.go:130] libmachine: Using SSH client type: native
	I0813 21:15:23.218701  438411 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0813 21:15:23.218717  438411 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 21:15:23.354490  438411 main.go:130] libmachine: SSH cmd err, output: <nil>: 
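
The SSH-availability probe above is simply running `exit 0` over SSH until it succeeds: a nil error means sshd is up and the injected key authenticates. A sketch of that probe using golang.org/x/crypto/ssh, assuming the key path shown earlier in the log; this is an illustration of the pattern, not libmachine's native client:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAvailable dials the guest and runs `exit 0`, the same probe
// the log shows; a nil error means SSH is usable.
func sshAvailable(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	for sshAvailable("192.168.39.226:22", "docker", "/path/to/id_rsa") != nil {
		time.Sleep(time.Second)
	}
	fmt.Println("SSH is available")
}
```
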
	I0813 21:15:23.354519  438411 main.go:130] libmachine: Detecting the provisioner...
	I0813 21:15:23.354530  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.360931  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.361309  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.361343  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.361582  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:23.361778  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.361923  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.362066  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:23.362232  438411 main.go:130] libmachine: Using SSH client type: native
	I0813 21:15:23.362375  438411 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0813 21:15:23.362386  438411 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 21:15:23.496344  438411 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 21:15:23.496449  438411 main.go:130] libmachine: found compatible host: buildroot
	I0813 21:15:23.496465  438411 main.go:130] libmachine: Provisioning with buildroot...
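
Provisioner detection above is `cat /etc/os-release` followed by matching the ID field ("found compatible host: buildroot"). A small sketch of parsing that output into a key/value map; the helper name is made up for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release output like the capture
// above into a map (quotes stripped), so a provisioner can be
// selected by matching the ID field.
func parseOSRelease(out string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nVERSION_ID=2020.02.12\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
	if parseOSRelease(out)["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
```
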
	I0813 21:15:23.496484  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetMachineName
	I0813 21:15:23.496787  438411 buildroot.go:166] provisioning hostname "custom-weave-20210813205926-393438"
	I0813 21:15:23.496822  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetMachineName
	I0813 21:15:23.497050  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.503396  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.504002  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.504032  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.504183  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:23.504377  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.504534  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.504690  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:23.504889  438411 main.go:130] libmachine: Using SSH client type: native
	I0813 21:15:23.505089  438411 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0813 21:15:23.505111  438411 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210813205926-393438 && echo "custom-weave-20210813205926-393438" | sudo tee /etc/hostname
	I0813 21:15:23.644594  438411 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210813205926-393438
	
	I0813 21:15:23.644626  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.650362  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.650804  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.650836  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.651011  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:23.651194  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.651348  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.651482  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:23.651691  438411 main.go:130] libmachine: Using SSH client type: native
	I0813 21:15:23.651890  438411 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0813 21:15:23.651915  438411 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210813205926-393438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210813205926-393438/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210813205926-393438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 21:15:23.784029  438411 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 21:15:23.784071  438411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube}
	I0813 21:15:23.784131  438411 buildroot.go:174] setting up certificates
	I0813 21:15:23.784152  438411 provision.go:83] configureAuth start
	I0813 21:15:23.784171  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetMachineName
	I0813 21:15:23.784529  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetIP
	I0813 21:15:23.791324  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.791767  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.791836  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.791988  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.797061  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.797475  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.797504  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.797609  438411 provision.go:138] copyHostCerts
	I0813 21:15:23.797671  438411 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem, removing ...
	I0813 21:15:23.797681  438411 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem
	I0813 21:15:23.797735  438411 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.pem (1078 bytes)
	I0813 21:15:23.797836  438411 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem, removing ...
	I0813 21:15:23.797847  438411 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem
	I0813 21:15:23.797879  438411 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cert.pem (1123 bytes)
	I0813 21:15:23.797956  438411 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem, removing ...
	I0813 21:15:23.797966  438411 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem
	I0813 21:15:23.797997  438411 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/key.pem (1675 bytes)
	I0813 21:15:23.798067  438411 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210813205926-393438 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube custom-weave-20210813205926-393438]
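
The server-cert step above signs a certificate against the cached CA with the machine's IP, localhost, "minikube", and the machine name all listed as subject alternative names. A crypto/x509 sketch of a SAN-bearing server cert signed by a throwaway CA; field values, key size, and lifetimes here are illustrative assumptions, not minikube's exact parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for the listed SANs
// with the given CA, the same shape as the provision step above.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-weave-20210813205926-393438"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA, standing in for minikube's cached ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(ca, caKey,
		[]string{"localhost", "minikube", "custom-weave-20210813205926-393438"},
		[]net.IP{net.ParseIP("192.168.39.226"), net.ParseIP("127.0.0.1")})
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
```
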
	I0813 21:15:23.936106  438411 provision.go:172] copyRemoteCerts
	I0813 21:15:23.936166  438411 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 21:15:23.936195  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:23.941970  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.942362  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:23.942415  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:23.942618  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:23.942868  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:23.943033  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:23.943159  438411 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:24.038590  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 21:15:24.055710  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 21:15:24.072762  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 21:15:24.089764  438411 provision.go:86] duration metric: configureAuth took 305.585122ms
	I0813 21:15:24.089787  438411 buildroot.go:189] setting minikube options for container-runtime
	I0813 21:15:24.089959  438411 config.go:177] Loaded profile config "custom-weave-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:15:24.089986  438411 main.go:130] libmachine: Checking connection to Docker...
	I0813 21:15:24.090004  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetURL
	I0813 21:15:24.092983  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | Using libvirt version 3000000
	I0813 21:15:24.098242  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.098610  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.098646  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.098801  438411 main.go:130] libmachine: Docker is up and running!
	I0813 21:15:24.098821  438411 main.go:130] libmachine: Reticulating splines...
	I0813 21:15:24.098829  438411 client.go:171] LocalClient.Create took 19.507802972s
	I0813 21:15:24.098846  438411 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210813205926-393438" took 19.507861121s
	I0813 21:15:24.098861  438411 start.go:267] post-start starting for "custom-weave-20210813205926-393438" (driver="kvm2")
	I0813 21:15:24.098869  438411 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 21:15:24.098883  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:24.099097  438411 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 21:15:24.099123  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:24.103850  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.104164  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.104197  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.104312  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:24.104468  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:24.104613  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:24.104749  438411 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:24.200233  438411 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 21:15:24.205351  438411 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 21:15:24.205380  438411 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/addons for local assets ...
	I0813 21:15:24.205436  438411 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files for local assets ...
	I0813 21:15:24.205559  438411 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem -> 3934382.pem in /etc/ssl/certs
	I0813 21:15:24.205678  438411 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 21:15:24.215294  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:15:24.233036  438411 start.go:270] post-start completed in 134.160484ms
	I0813 21:15:24.233087  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:15:24.233605  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetIP
	I0813 21:15:24.239106  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.239453  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.239479  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.239724  438411 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/config.json ...
	I0813 21:15:24.239875  438411 start.go:129] duration metric: createHost completed in 19.662524788s
	I0813 21:15:24.239888  438411 start.go:80] releasing machines lock for "custom-weave-20210813205926-393438", held for 19.662611594s
	I0813 21:15:24.239920  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:24.240099  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetIP
	I0813 21:15:24.244447  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.244747  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.244771  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.244918  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:24.245073  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:24.245640  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .DriverName
	I0813 21:15:24.245891  438411 ssh_runner.go:149] Run: systemctl --version
	I0813 21:15:24.245917  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:24.245970  438411 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 21:15:24.246030  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:24.252148  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.252476  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.252501  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.252604  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:24.252757  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:24.252922  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:24.253070  438411 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:24.253323  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.253670  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:24.253703  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:24.253885  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:24.254050  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:24.254225  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:24.254364  438411 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/custom-weave-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:24.379340  438411 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:15:24.379462  438411 ssh_runner.go:149] Run: sudo crictl images --output json
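
Before loading anything, minikube asks crictl for the node's image list and looks for the expected kube image; the "couldn't find preloaded image ... assuming images are not preloaded" line further down is that check failing. A sketch of parsing the JSON; the field names ("images", "repoTags") follow the CRI ListImages response and are an assumption about crictl's output shape:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// criImages models the relevant slice of `crictl images --output json`
// (assumed shape: top-level "images" array with "repoTags" entries).
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var out criImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["k8s.gcr.io/pause:3.4.1"]}]}`)
	if ok, _ := hasImage(raw, "k8s.gcr.io/kube-apiserver:v1.21.3"); !ok {
		fmt.Println("assuming images are not preloaded")
	}
}
```
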
	I0813 21:15:22.263617  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:22.764174  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:23.263398  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:23.763512  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:24.263767  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:24.763813  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:25.263735  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:25.764328  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:26.263593  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:26.763319  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:25.620520  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:27.621202  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:24.341239  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:24.841300  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:25.341243  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:25.841171  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:26.342190  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:26.841913  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:27.341430  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:27.841223  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:28.341600  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:28.841174  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:28.385413  438411 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.00592051s)
	I0813 21:15:28.385554  438411 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 21:15:28.385649  438411 ssh_runner.go:149] Run: which lz4
	I0813 21:15:28.389963  438411 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 21:15:28.394524  438411 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 21:15:28.394553  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
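
The `%!s(MISSING) %!y(MISSING)` in the stat line above appears to be Go's missing-argument escape for the intended `%s %y` format flags; the step itself is a probe-then-copy: stat the guest for /preloaded.tar.lz4 and only upload the ~900MB preload tarball when the probe fails. A sketch of that pattern, with ssh/scp as illustrative stand-ins for minikube's ssh_runner helpers:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile probes the guest with stat and copies the file
// over only when the probe fails (pattern from the log above).
func ensureRemoteFile(host, local, remote string) error {
	probe := exec.Command("ssh", host, fmt.Sprintf(`stat -c "%%s %%y" %s`, remote))
	if probe.Run() == nil {
		return nil // already present; skip the large transfer
	}
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	_ = ensureRemoteFile("docker@192.168.39.226",
		"preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4",
		"/preloaded.tar.lz4")
}
```
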
	I0813 21:15:27.264244  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:27.764281  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:28.264111  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:28.763289  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:29.264222  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:29.763488  437853 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 21:15:30.171662  437853 kubeadm.go:985] duration metric: took 9.812219106s to wait for elevateKubeSystemPrivileges.
	I0813 21:15:30.171697  437853 kubeadm.go:392] StartCluster complete in 33.991945685s
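
The burst of identical "kubectl get sa default" invocations above is a poll: minikube retries roughly every 500ms until the default service account exists, then records the elevateKubeSystemPrivileges duration. A minimal Go sketch of that loop, reusing the binary and kubeconfig paths from the log (the timeout value and error handling style are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds,
// mirroring the ~500ms retry cadence visible in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.21.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
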
	I0813 21:15:30.171728  437853 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:30.171858  437853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:15:30.173948  437853 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0813 21:15:30.346957  437853 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0813 21:15:31.353251  437853 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210813205926-393438" rescaled to 1
	I0813 21:15:31.353314  437853 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.72.235 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 21:15:31.354920  437853 out.go:177] * Verifying Kubernetes components...
	I0813 21:15:31.354991  437853 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:15:31.353491  437853 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:15:31.353513  437853 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 21:15:31.353703  437853 config.go:177] Loaded profile config "calico-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:15:31.355084  437853 addons.go:59] Setting storage-provisioner=true in profile "calico-20210813205926-393438"
	I0813 21:15:31.355102  437853 addons.go:135] Setting addon storage-provisioner=true in "calico-20210813205926-393438"
	W0813 21:15:31.355110  437853 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:15:31.355129  437853 addons.go:59] Setting default-storageclass=true in profile "calico-20210813205926-393438"
	I0813 21:15:31.355145  437853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210813205926-393438"
	I0813 21:15:31.355181  437853 host.go:66] Checking if "calico-20210813205926-393438" exists ...
	I0813 21:15:31.355633  437853 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:31.355655  437853 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:31.355670  437853 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:31.355687  437853 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:31.382250  437853 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0813 21:15:31.382818  437853 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:31.383411  437853 main.go:130] libmachine: Using API Version  1
	I0813 21:15:31.383433  437853 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:31.383510  437853 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0813 21:15:31.383972  437853 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:31.384454  437853 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:31.384551  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetState
	I0813 21:15:31.385829  437853 main.go:130] libmachine: Using API Version  1
	I0813 21:15:31.385850  437853 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:31.386215  437853 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:31.387115  437853 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:31.387165  437853 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:31.399622  437853 addons.go:135] Setting addon default-storageclass=true in "calico-20210813205926-393438"
	W0813 21:15:31.399645  437853 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:15:31.399674  437853 host.go:66] Checking if "calico-20210813205926-393438" exists ...
	I0813 21:15:31.400103  437853 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:31.400145  437853 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:31.403383  437853 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0813 21:15:31.403825  437853 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:31.404369  437853 main.go:130] libmachine: Using API Version  1
	I0813 21:15:31.404395  437853 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:31.404784  437853 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:31.404941  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetState
	I0813 21:15:31.408933  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .DriverName
	I0813 21:15:31.411357  437853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:15:31.411497  437853 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:15:31.411508  437853 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:15:31.411535  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:31.417916  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | domain calico-20210813205926-393438 has defined MAC address 52:54:00:0e:02:de in network mk-calico-20210813205926-393438
	I0813 21:15:31.418590  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:02:de", ip: ""} in network mk-calico-20210813205926-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:14:29 +0000 UTC Type:0 Mac:52:54:00:0e:02:de Iaid: IPaddr:192.168.72.235 Prefix:24 Hostname:calico-20210813205926-393438 Clientid:01:52:54:00:0e:02:de}
	I0813 21:15:31.418618  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | domain calico-20210813205926-393438 has defined IP address 192.168.72.235 and MAC address 52:54:00:0e:02:de in network mk-calico-20210813205926-393438
	I0813 21:15:31.418784  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:31.418919  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:31.419039  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:31.419181  437853 sshutil.go:53] new ssh client: &{IP:192.168.72.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/calico-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:31.430084  437853 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0813 21:15:31.430654  437853 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:31.431293  437853 main.go:130] libmachine: Using API Version  1
	I0813 21:15:31.431312  437853 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:31.431766  437853 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:31.432254  437853 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:31.432299  437853 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:31.446774  437853 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45131
	I0813 21:15:31.447518  437853 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:31.448486  437853 main.go:130] libmachine: Using API Version  1
	I0813 21:15:31.448514  437853 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:31.448911  437853 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:31.449130  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetState
	I0813 21:15:31.452721  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .DriverName
	I0813 21:15:31.453018  437853 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:15:31.453033  437853 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:15:31.453056  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHHostname
	I0813 21:15:31.459861  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | domain calico-20210813205926-393438 has defined MAC address 52:54:00:0e:02:de in network mk-calico-20210813205926-393438
	I0813 21:15:31.460336  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:02:de", ip: ""} in network mk-calico-20210813205926-393438: {Iface:virbr4 ExpiryTime:2021-08-13 22:14:29 +0000 UTC Type:0 Mac:52:54:00:0e:02:de Iaid: IPaddr:192.168.72.235 Prefix:24 Hostname:calico-20210813205926-393438 Clientid:01:52:54:00:0e:02:de}
	I0813 21:15:31.460374  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | domain calico-20210813205926-393438 has defined IP address 192.168.72.235 and MAC address 52:54:00:0e:02:de in network mk-calico-20210813205926-393438
	I0813 21:15:31.460413  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHPort
	I0813 21:15:31.460603  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHKeyPath
	I0813 21:15:31.460821  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .GetSSHUsername
	I0813 21:15:31.460998  437853 sshutil.go:53] new ssh client: &{IP:192.168.72.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/calico-20210813205926-393438/id_rsa Username:docker}
	I0813 21:15:31.789242  437853 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:15:31.821919  437853 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
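
Both addon manifests follow the same pattern: the YAML is scp'd from memory into /etc/kubernetes/addons (addons.go:275 above), then applied with the in-guest kubectl. A sketch of the apply step; the sudo-with-inline-KUBECONFIG form matches the commands in the log, while the logging around it is an assumption:

package main

import (
	"log"
	"os/exec"
)

// applyAddon runs the in-guest kubectl against an addon manifest,
// equivalent to the two "kubectl apply -f /etc/kubernetes/addons/*.yaml"
// runs above.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.21.3/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			log.Fatal(err)
		}
	}
}
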
	I0813 21:15:29.631850  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:31.634481  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:29.341863  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:29.842120  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:29.864311  437512 api_server.go:70] duration metric: took 50.535211034s to wait for apiserver process to appear ...
	I0813 21:15:29.864349  437512 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:15:29.864362  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:32.274007  438411 containerd.go:546] Took 3.884074 seconds to copy over tarball
	I0813 21:15:32.274086  438411 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
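
The 438411 sequence above is the preload fallback path: the stat existence check fails, the ~886 MB preloaded-images tarball is scp'd over, and it is unpacked with an lz4 filter. A sketch of the guest-side half, under the assumption that the tarball has already been transferred (paths are the ones from the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded-images tarball into /var, the
// same operation as "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"
// above. It assumes lz4 is on the PATH (the "which lz4" check) and
// that the tarball was already copied over SSH.
func extractPreload() error {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		return err // nothing was copied over; caller falls back to pulling images
	}
	return exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	if err := extractPreload(); err != nil {
		log.Fatal(err)
	}
}
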
	I0813 21:15:32.171190  437853 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 21:15:32.173794  437853 node_ready.go:35] waiting up to 5m0s for node "calico-20210813205926-393438" to be "Ready" ...
	I0813 21:15:32.181446  437853 node_ready.go:49] node "calico-20210813205926-393438" has status "Ready":"True"
	I0813 21:15:32.181469  437853 node_ready.go:38] duration metric: took 7.63634ms waiting for node "calico-20210813205926-393438" to be "Ready" ...
	I0813 21:15:32.181481  437853 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:15:32.207975  437853 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.235453  437853 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:36.122824  437853 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.300858676s)
	I0813 21:15:36.122887  437853 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:36.122903  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .Close
	I0813 21:15:36.123003  437853 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.951782286s)
	I0813 21:15:36.123018  437853 start.go:728] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS
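
The bash pipeline completed above edits the coredns ConfigMap in place: it dumps the Corefile, uses sed to insert a hosts block ahead of the forward directive, and pipes the result back through kubectl replace. Unescaping the sed expression from the log, the injected stanza is:

        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }

This is what makes host.minikube.internal resolvable from inside the cluster.
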
	I0813 21:15:36.123335  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | Closing plugin on server side
	I0813 21:15:36.123399  437853 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:36.123415  437853 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:36.123430  437853 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:36.123443  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .Close
	I0813 21:15:36.123784  437853 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:36.123790  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | Closing plugin on server side
	I0813 21:15:36.123800  437853 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:36.123817  437853 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:36.123826  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .Close
	I0813 21:15:36.125393  437853 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:36.125406  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | Closing plugin on server side
	I0813 21:15:36.125411  437853 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:36.126913  437853 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.337609937s)
	I0813 21:15:36.126974  437853 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:36.126996  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .Close
	I0813 21:15:36.127316  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | Closing plugin on server side
	I0813 21:15:36.127336  437853 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:36.127344  437853 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:36.127358  437853 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:36.127367  437853 main.go:130] libmachine: (calico-20210813205926-393438) Calling .Close
	I0813 21:15:36.128760  437853 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:36.128780  437853 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:36.128810  437853 main.go:130] libmachine: (calico-20210813205926-393438) DBG | Closing plugin on server side
	I0813 21:15:36.130816  437853 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 21:15:36.130841  437853 addons.go:344] enableAddons completed in 4.777340707s
	I0813 21:15:36.244937  437853 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:34.123871  436805 pod_ready.go:102] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:34.622434  436805 pod_ready.go:92] pod "cilium-jjq8z" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:34.622464  436805 pod_ready.go:81] duration metric: took 1m12.535900435s waiting for pod "cilium-jjq8z" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.622479  436805 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-99d899fb5-p5879" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.632364  436805 pod_ready.go:92] pod "cilium-operator-99d899fb5-p5879" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:34.632388  436805 pod_ready.go:81] duration metric: took 9.900695ms waiting for pod "cilium-operator-99d899fb5-p5879" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.632401  436805 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-57fjf" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.635572  436805 pod_ready.go:97] error getting pod "coredns-558bd4d5db-57fjf" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-57fjf" not found
	I0813 21:15:34.635596  436805 pod_ready.go:81] duration metric: took 3.186383ms waiting for pod "coredns-558bd4d5db-57fjf" in "kube-system" namespace to be "Ready" ...
	E0813 21:15:34.635608  436805 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-57fjf" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-57fjf" not found
	I0813 21:15:34.635616  436805 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.644729  436805 pod_ready.go:92] pod "etcd-cilium-20210813205926-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:34.644750  436805 pod_ready.go:81] duration metric: took 9.125486ms waiting for pod "etcd-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.644762  436805 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.656936  436805 pod_ready.go:92] pod "kube-apiserver-cilium-20210813205926-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:34.656964  436805 pod_ready.go:81] duration metric: took 12.192658ms waiting for pod "kube-apiserver-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.656981  436805 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.820146  436805 pod_ready.go:92] pod "kube-controller-manager-cilium-20210813205926-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:34.820173  436805 pod_ready.go:81] duration metric: took 163.17989ms waiting for pod "kube-controller-manager-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:34.820187  436805 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-ns474" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:35.218733  436805 pod_ready.go:92] pod "kube-proxy-ns474" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:35.218815  436805 pod_ready.go:81] duration metric: took 398.616336ms waiting for pod "kube-proxy-ns474" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:35.218848  436805 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:35.623977  436805 pod_ready.go:92] pod "kube-scheduler-cilium-20210813205926-393438" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:35.624002  436805 pod_ready.go:81] duration metric: took 405.129098ms waiting for pod "kube-scheduler-cilium-20210813205926-393438" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:35.624011  436805 pod_ready.go:38] duration metric: took 1m13.55884333s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
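
Each pod_ready wait above boils down to fetching the pod and inspecting its Ready condition. A minimal client-go sketch of that check; the kubeconfig loading shown here is an assumption for a standalone program, since minikube wires its client up internally:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// test behind the pod_ready.go:92 / pod_ready.go:102 lines above.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for { // the real loop is capped (5m0s per pod in the log)
		ok, err := podReady(cs, "kube-system", "cilium-jjq8z")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s re-check interval above
	}
}
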
	I0813 21:15:35.624030  436805 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:15:35.624050  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 21:15:35.624102  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 21:15:35.692014  436805 cri.go:76] found id: "aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451"
	I0813 21:15:35.692064  436805 cri.go:76] found id: ""
	I0813 21:15:35.692077  436805 logs.go:270] 1 containers: [aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451]
	I0813 21:15:35.692145  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:35.707793  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 21:15:35.707953  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 21:15:35.775049  436805 cri.go:76] found id: "70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679"
	I0813 21:15:35.775081  436805 cri.go:76] found id: ""
	I0813 21:15:35.775090  436805 logs.go:270] 1 containers: [70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679]
	I0813 21:15:35.775149  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:35.786523  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 21:15:35.786602  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 21:15:35.855772  436805 cri.go:76] found id: "549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892"
	I0813 21:15:35.855802  436805 cri.go:76] found id: ""
	I0813 21:15:35.855810  436805 logs.go:270] 1 containers: [549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892]
	I0813 21:15:35.855876  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:35.862008  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 21:15:35.862082  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 21:15:35.907737  436805 cri.go:76] found id: "8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86"
	I0813 21:15:35.907768  436805 cri.go:76] found id: ""
	I0813 21:15:35.907776  436805 logs.go:270] 1 containers: [8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86]
	I0813 21:15:35.907839  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:35.914924  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 21:15:35.915043  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 21:15:35.968695  436805 cri.go:76] found id: "b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050"
	I0813 21:15:35.968728  436805 cri.go:76] found id: ""
	I0813 21:15:35.968737  436805 logs.go:270] 1 containers: [b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050]
	I0813 21:15:35.968800  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:35.975574  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 21:15:35.975647  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 21:15:36.043563  436805 cri.go:76] found id: ""
	I0813 21:15:36.043597  436805 logs.go:270] 0 containers: []
	W0813 21:15:36.043607  436805 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 21:15:36.043618  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 21:15:36.043686  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 21:15:36.094993  436805 cri.go:76] found id: "3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee"
	I0813 21:15:36.095023  436805 cri.go:76] found id: "4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181"
	I0813 21:15:36.095032  436805 cri.go:76] found id: ""
	I0813 21:15:36.095039  436805 logs.go:270] 2 containers: [3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee 4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181]
	I0813 21:15:36.095102  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:36.102578  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:36.110920  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 21:15:36.110990  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 21:15:36.175043  436805 cri.go:76] found id: "e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8"
	I0813 21:15:36.175122  436805 cri.go:76] found id: ""
	I0813 21:15:36.175132  436805 logs.go:270] 1 containers: [e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8]
	I0813 21:15:36.175188  436805 ssh_runner.go:149] Run: which crictl
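
Each cri.go listing above is a crictl invocation whose stdout is split into container IDs (zero, one, or two matches per component here). A sketch of that lookup; the crictl flags are the ones in the log, the whitespace splitting is an assumption about how the IDs are parsed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of CRI containers whose name matches
// the filter, equivalent to "sudo crictl ps -a --quiet --name=<name>".
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one 64-char ID per line
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
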
	I0813 21:15:36.181680  436805 logs.go:123] Gathering logs for dmesg ...
	I0813 21:15:36.181713  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 21:15:36.199921  436805 logs.go:123] Gathering logs for coredns [549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892] ...
	I0813 21:15:36.199960  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892"
	I0813 21:15:36.247236  436805 logs.go:123] Gathering logs for kube-proxy [b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050] ...
	I0813 21:15:36.247267  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050"
	I0813 21:15:36.297843  436805 logs.go:123] Gathering logs for storage-provisioner [3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee] ...
	I0813 21:15:36.297885  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee"
	I0813 21:15:36.347025  436805 logs.go:123] Gathering logs for containerd ...
	I0813 21:15:36.347068  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 21:15:36.408010  436805 logs.go:123] Gathering logs for container status ...
	I0813 21:15:36.408058  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 21:15:36.475419  436805 logs.go:123] Gathering logs for kube-controller-manager [e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8] ...
	I0813 21:15:36.475465  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8"
	I0813 21:15:36.565388  436805 logs.go:123] Gathering logs for kubelet ...
	I0813 21:15:36.565436  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 21:15:36.641668  436805 logs.go:138] Found kubelet problem: Aug 13 21:14:21 cilium-20210813205926-393438 kubelet[2812]: E0813 21:14:21.514897    2812 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:cilium-20210813205926-393438" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'cilium-20210813205926-393438' and this object
	I0813 21:15:36.653954  436805 logs.go:123] Gathering logs for describe nodes ...
	I0813 21:15:36.653993  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 21:15:37.149603  436805 logs.go:123] Gathering logs for kube-apiserver [aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451] ...
	I0813 21:15:37.149702  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451"
	I0813 21:15:37.239711  436805 logs.go:123] Gathering logs for etcd [70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679] ...
	I0813 21:15:37.239752  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679"
	I0813 21:15:37.311921  436805 logs.go:123] Gathering logs for kube-scheduler [8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86] ...
	I0813 21:15:37.311976  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86"
	I0813 21:15:37.384215  436805 logs.go:123] Gathering logs for storage-provisioner [4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181] ...
	I0813 21:15:37.384260  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181"
	I0813 21:15:37.436993  436805 out.go:311] Setting ErrFile to fd 2...
	I0813 21:15:37.437023  436805 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 21:15:37.437152  436805 out.go:242] X Problems detected in kubelet:
	W0813 21:15:37.437224  436805 out.go:242]   Aug 13 21:14:21 cilium-20210813205926-393438 kubelet[2812]: E0813 21:14:21.514897    2812 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:cilium-20210813205926-393438" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'cilium-20210813205926-393438' and this object
	I0813 21:15:37.437263  436805 out.go:311] Setting ErrFile to fd 2...
	I0813 21:15:37.437287  436805 out.go:345] TERM=,COLORTERM=, which probably does not support color
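
The "Found kubelet problem" and "X Problems detected in kubelet" lines come from scanning the gathered journalctl output for known error patterns. A rough sketch of that scan; the single substring match is illustrative only, minikube's real matcher covers many patterns:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// scanKubeletLogs pulls the last 400 kubelet journal lines (as in the
// "journalctl -u kubelet -n 400" run above) and flags problem lines.
func scanKubeletLogs() ([]string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Illustrative pattern only; it happens to catch the
		// "Failed to watch *v1.ConfigMap" problem shown above.
		if strings.Contains(line, "Failed to watch") {
			problems = append(problems, line)
		}
	}
	return problems, sc.Err()
}

func main() {
	problems, err := scanKubeletLogs()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range problems {
		fmt.Println("X Problem detected in kubelet:", p)
	}
}
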
	I0813 21:15:34.864654  437512 api_server.go:255] stopped: https://192.168.61.119:8443/healthz: Get "https://192.168.61.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 21:15:35.365721  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:35.450182  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 21:15:35.450219  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 21:15:35.865415  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:35.893443  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:15:35.893482  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:15:36.365073  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:36.394587  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:15:36.394623  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:15:36.864880  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:36.911994  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:15:36.912031  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:15:37.365611  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:39.378904  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:15:44.523819  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:15:44.865668  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:45.503936  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 21:15:45.503973  437512 api_server.go:101] status: https://192.168.61.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 21:15:45.865469  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:45.871736  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 200:
	ok
	I0813 21:15:45.880545  437512 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:15:45.880571  437512 api_server.go:129] duration metric: took 16.016215007s to wait for apiserver health ...
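
The healthz loop above polls the apiserver every ~500ms and tolerates the transient phases shown: 403 while the request is still anonymous (RBAC bootstrap pending), 500 while post-start hooks report "failed: reason withheld", then 200 "ok". A sketch of the poll; skipping TLS verification is an assumption made here for brevity, the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok", as in the api_server.go:239/265 loop above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only; minikube verifies the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			// 403 and 500 are expected while bootstrap finishes; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.119:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
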
	I0813 21:15:45.880590  437512 cni.go:93] Creating CNI manager for ""
	I0813 21:15:45.880604  437512 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 21:15:45.882044  437512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 21:15:45.882111  437512 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 21:15:45.892291  437512 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
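
The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at cni.go:163. The log shows only the file's size, not its contents; a plausible minimal bridge conflist of the general shape this step writes (every field below, including the subnet, is an assumption):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
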
	I0813 21:15:45.917512  437512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:15:45.935817  437512 system_pods.go:59] 9 kube-system pods found
	I0813 21:15:45.935851  437512 system_pods.go:61] "coredns-78fcd69978-p4xpj" [f526b6f0-a3a3-4d19-a0eb-0ed3226e7877] Running
	I0813 21:15:45.935863  437512 system_pods.go:61] "coredns-78fcd69978-xp594" [4f772c12-7fdf-4ccd-b548-9089857210e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:15:45.935870  437512 system_pods.go:61] "etcd-newest-cni-20210813211202-393438" [31e482fb-2505-43ad-92cd-9542acf345e0] Running
	I0813 21:15:45.935877  437512 system_pods.go:61] "kube-apiserver-newest-cni-20210813211202-393438" [ed0358e6-2910-4454-bf17-3cfdab38213d] Running
	I0813 21:15:45.935884  437512 system_pods.go:61] "kube-controller-manager-newest-cni-20210813211202-393438" [761818d0-ebbb-435c-bf3f-52e4636e90e8] Running
	I0813 21:15:45.935894  437512 system_pods.go:61] "kube-proxy-29642" [9e8095d0-fe55-430e-a196-2252122e423f] Running
	I0813 21:15:45.935900  437512 system_pods.go:61] "kube-scheduler-newest-cni-20210813211202-393438" [0ccb79ce-95b3-434a-a7a0-401ebe0b1efc] Running
	I0813 21:15:45.935912  437512 system_pods.go:61] "metrics-server-7c784ccb57-dk79w" [ce935cb7-2885-46f8-b983-c24a00c0664e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:15:45.935925  437512 system_pods.go:61] "storage-provisioner" [37826e43-1db5-4693-b6ba-bebfca4824d4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:15:45.935935  437512 system_pods.go:74] duration metric: took 18.40438ms to wait for pod list to return data ...
	I0813 21:15:45.935947  437512 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:15:45.943588  437512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:15:45.943618  437512 node_conditions.go:123] node cpu capacity is 2
	I0813 21:15:45.943630  437512 node_conditions.go:105] duration metric: took 7.674859ms to run NodePressure ...
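
The NodePressure verification above reads the node's capacity figures (ephemeral storage and CPU, as printed at node_conditions.go:122-123) and checks its pressure conditions. A client-go sketch of the same checks; the exact condition set examined is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The capacity figures printed in the log above.
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		// Fail verification if any pressure condition is True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("node %s under pressure: %s\n", n.Name, c.Type)
				}
			}
		}
	}
}
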
	I0813 21:15:45.943647  437512 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 21:15:46.381359  437512 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 21:15:46.393312  437512 ops.go:34] apiserver oom_adj: -16
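
The ops.go check above confirms the apiserver's OOM score adjustment by reading /proc/<pid>/oom_adj for the kube-apiserver process; -16 means the kernel strongly avoids killing it. A sketch of that read, taking the first pgrep match (an assumption; there is a single apiserver here):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reproduces: cat /proc/$(pgrep kube-apiserver)/oom_adj
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.Fields(string(out))[0] // assume a single apiserver process
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
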
	I0813 21:15:46.393344  437512 kubeadm.go:604] restartCluster took 1m11.576560843s
	I0813 21:15:46.393361  437512 kubeadm.go:392] StartCluster complete in 1m11.624939905s
	I0813 21:15:46.393388  437512 settings.go:142] acquiring lock: {Name:mk2e042a75d7d4722d2a29030eed8e43c687ad8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:46.393525  437512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:15:46.395276  437512 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig: {Name:mk8b97e3aadd41f736bf0e5000577319169228de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:46.402982  437512 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210813211202-393438" rescaled to 1
	I0813 21:15:46.403058  437512 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0813 21:15:46.403107  437512 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0813 21:15:46.403213  437512 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210813211202-393438"
	I0813 21:15:46.403236  437512 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210813211202-393438"
	W0813 21:15:46.403244  437512 addons.go:147] addon storage-provisioner should already be in state true
	I0813 21:15:46.403274  437512 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:46.403319  437512 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:15:46.403378  437512 addons.go:59] Setting dashboard=true in profile "newest-cni-20210813211202-393438"
	I0813 21:15:46.403397  437512 addons.go:135] Setting addon dashboard=true in "newest-cni-20210813211202-393438"
	W0813 21:15:46.403404  437512 addons.go:147] addon dashboard should already be in state true
	I0813 21:15:46.403428  437512 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:46.403078  437512 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 21:15:46.404989  437512 out.go:177] * Verifying Kubernetes components...
	I0813 21:15:46.403834  437512 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210813211202-393438"
	I0813 21:15:46.405046  437512 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:15:46.405054  437512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210813211202-393438"
	I0813 21:15:46.403838  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.405145  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.403850  437512 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210813211202-393438"
	I0813 21:15:46.405359  437512 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210813211202-393438"
	W0813 21:15:46.405386  437512 addons.go:147] addon metrics-server should already be in state true
	I0813 21:15:46.403856  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.405445  437512 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:46.405473  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.405500  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.405513  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.405914  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.405981  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.420877  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0813 21:15:46.421276  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.421941  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.421965  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.422558  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.422817  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:46.430816  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34605
	I0813 21:15:46.431506  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.434120  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.434560  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.434985  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.435732  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.435774  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.441345  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37961
	I0813 21:15:46.441528  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38857
	I0813 21:15:46.441750  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.441835  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.442253  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.442272  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.442311  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.442374  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.442750  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.443319  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.443366  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.443768  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.444320  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.444381  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.450780  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0813 21:15:46.451719  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.451985  437512 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210813211202-393438"
	W0813 21:15:46.452009  437512 addons.go:147] addon default-storageclass should already be in state true
	I0813 21:15:46.452038  437512 host.go:66] Checking if "newest-cni-20210813211202-393438" exists ...
	I0813 21:15:46.452266  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.452289  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.452427  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.452471  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.452663  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.453121  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:46.457714  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:46.459968  437512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 21:15:46.460084  437512 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:15:46.460095  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 21:15:46.460116  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:15:46.461182  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0813 21:15:46.461667  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.462262  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.462286  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.462795  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.463141  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:46.466245  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0813 21:15:46.467032  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38119
	I0813 21:15:46.467155  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:46.467570  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.467581  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.469217  437512 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0813 21:15:46.468103  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.469309  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.468304  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.468797  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.469590  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:15:46.469760  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.470692  437512 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 21:15:46.470737  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:14:05 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:15:46.470744  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 21:15:46.470757  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 21:15:46.470776  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.470777  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:15:46.470787  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.470904  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:15:46.471035  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:46.471089  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:15:46.471241  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.471602  437512 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:15:46.471842  437512 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:15:46.471931  437512 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:15:46.475997  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:40.672581  438411 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.398455136s)
	I0813 21:15:44.523818  438411 containerd.go:553] Took 12.249765 seconds to extract the tarball
	I0813 21:15:44.523850  438411 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 21:15:44.588875  438411 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:15:44.726999  438411 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:15:44.777104  438411 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 21:15:44.820055  438411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 21:15:44.834494  438411 docker.go:153] disabling docker service ...
	I0813 21:15:44.834558  438411 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 21:15:44.847155  438411 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 21:15:44.857364  438411 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 21:15:45.038557  438411 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 21:15:45.175336  438411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 21:15:45.186622  438411 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 21:15:45.200660  438411 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
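The blob in the command above is containerd's config.toml, shipped inline as base64 and materialized on the VM via base64 -d | sudo tee; the decoded text begins root = "/var/lib/containerd". A minimal Go sketch of the same decode-and-write step, run locally rather than over SSH; the constant below is only the first line of the real payload:

package main

import (
	"encoding/base64"
	"log"
	"os"
)

func main() {
	// First line of the payload only; the log carries the full blob.
	encoded := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo="
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		log.Fatal(err)
	}
	// Local equivalent of `base64 -d | sudo tee /etc/containerd/config.toml`.
	if err := os.WriteFile("config.toml", raw, 0o644); err != nil {
		log.Fatal(err)
	}
}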
	I0813 21:15:45.215633  438411 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 21:15:45.222029  438411 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 21:15:45.222090  438411 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 21:15:45.237851  438411 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 21:15:45.244365  438411 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 21:15:45.387389  438411 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 21:15:45.540096  438411 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 21:15:45.540185  438411 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:15:45.549240  438411 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
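start.go gives the socket up to 60s to appear and retry.go re-runs the stat after a delay, as the lines above show. A rough local sketch of that poll loop, with os.Stat standing in for the SSH round-trip:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "will retry after ..." behaviour logged above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Println(err)
	}
}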
	I0813 21:15:46.654105  438411 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 21:15:46.660879  438411 start.go:413] Will wait 60s for crictl version
	I0813 21:15:46.660949  438411 ssh_runner.go:149] Run: sudo crictl version
	I0813 21:15:46.700701  438411 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0813 21:15:46.700776  438411 ssh_runner.go:149] Run: containerd --version
	I0813 21:15:46.739653  438411 ssh_runner.go:149] Run: containerd --version
	I0813 21:15:45.548044  437853 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:46.478238  437512 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0813 21:15:46.478303  437512 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 21:15:46.478313  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 21:15:46.478332  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:15:46.478050  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.478522  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:14:05 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:15:46.478560  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.478785  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:15:46.478965  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:15:46.479099  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:15:46.479475  437512 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:15:46.485042  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.485751  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:14:05 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:15:46.485833  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.486072  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:15:46.486243  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:15:46.486470  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:15:46.486577  437512 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0813 21:15:46.486608  437512 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
	I0813 21:15:46.486920  437512 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:15:46.487392  437512 main.go:130] libmachine: Using API Version  1
	I0813 21:15:46.487421  437512 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:15:46.487754  437512 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:15:46.487937  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetState
	I0813 21:15:46.491077  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .DriverName
	I0813 21:15:46.491319  437512 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 21:15:46.491336  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 21:15:46.491355  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHHostname
	I0813 21:15:46.497268  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.497649  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:cf:c7", ip: ""} in network mk-newest-cni-20210813211202-393438: {Iface:virbr3 ExpiryTime:2021-08-13 22:14:05 +0000 UTC Type:0 Mac:52:54:00:cc:cf:c7 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:newest-cni-20210813211202-393438 Clientid:01:52:54:00:cc:cf:c7}
	I0813 21:15:46.497679  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | domain newest-cni-20210813211202-393438 has defined IP address 192.168.61.119 and MAC address 52:54:00:cc:cf:c7 in network mk-newest-cni-20210813211202-393438
	I0813 21:15:46.497752  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHPort
	I0813 21:15:46.497902  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHKeyPath
	I0813 21:15:46.498099  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .GetSSHUsername
	I0813 21:15:46.498235  437512 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/newest-cni-20210813211202-393438/id_rsa Username:docker}
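Each "scp memory --> ..." line above streams an in-memory manifest to the VM through the SSH client that sshutil.go just opened. A simplified sketch of that pattern using golang.org/x/crypto/ssh; the host, user, key path and payload are placeholders, not minikube's actual implementation:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams data to remotePath via `sudo tee`, roughly what
// the "scp memory" steps above do for each addon manifest.
func pushFile(addr, user, keyPath, remotePath string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	err := pushFile("192.168.61.119:22", "docker",
		"/path/to/id_rsa", // hypothetical key path
		"/etc/kubernetes/addons/storageclass.yaml",
		[]byte("# manifest bytes here\n"))
	if err != nil {
		log.Fatal(err)
	}
}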
	I0813 21:15:46.627210  437512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 21:15:46.670579  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 21:15:46.670604  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 21:15:46.678778  437512 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 21:15:46.678800  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 21:15:46.718123  437512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 21:15:46.779041  437512 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 21:15:46.779066  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 21:15:46.783166  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 21:15:46.783186  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 21:15:46.786889  437512 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:15:46.786926  437512 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 21:15:46.786938  437512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:46.817684  437512 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:15:46.817707  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 21:15:46.848158  437512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 21:15:46.857739  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 21:15:46.857765  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 21:15:46.890145  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 21:15:46.890170  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 21:15:46.955295  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 21:15:46.955321  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 21:15:47.040044  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 21:15:47.040070  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 21:15:47.181607  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 21:15:47.181635  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 21:15:47.258789  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 21:15:47.258816  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 21:15:47.346107  437512 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:15:47.346133  437512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 21:15:47.386081  437512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 21:15:47.666909  437512 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.039642884s)
	I0813 21:15:47.666960  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.666975  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.667243  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.667290  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.667295  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.667312  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.667322  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.667818  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.667832  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.667847  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.806737  437512 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.019780808s)
	I0813 21:15:47.806772  437512 api_server.go:70] duration metric: took 1.403652943s to wait for apiserver process to appear ...
	I0813 21:15:47.806780  437512 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:15:47.806790  437512 api_server.go:239] Checking apiserver healthz at https://192.168.61.119:8443/healthz ...
	I0813 21:15:47.807219  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.807241  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.807449  437512 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.089288122s)
	I0813 21:15:47.807482  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.807496  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.807519  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.807550  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.807558  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.807567  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.807575  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.807700  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.807809  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.807900  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.807920  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.807932  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.807973  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.807992  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.808001  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.808012  437512 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210813211202-393438"
	I0813 21:15:47.808161  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:47.808230  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.808262  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.808305  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:47.808341  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:47.808669  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:47.808683  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:47.817067  437512 api_server.go:265] https://192.168.61.119:8443/healthz returned 200:
	ok
	I0813 21:15:47.818841  437512 api_server.go:139] control plane version: v1.22.0-rc.0
	I0813 21:15:47.818862  437512 api_server.go:129] duration metric: took 12.076708ms to wait for apiserver health ...
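The healthz wait above is a plain HTTPS GET against the apiserver until it answers 200 with body "ok". A minimal sketch of one probe; it skips certificate verification for brevity, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The apiserver serves a cluster-CA-signed cert; skipping verification
	// keeps this sketch short.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.61.119:8443/healthz")
	if err != nil {
		fmt.Println("healthz not ready:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}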
	I0813 21:15:47.818873  437512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:15:47.828766  437512 system_pods.go:59] 9 kube-system pods found
	I0813 21:15:47.828794  437512 system_pods.go:61] "coredns-78fcd69978-p4xpj" [f526b6f0-a3a3-4d19-a0eb-0ed3226e7877] Running
	I0813 21:15:47.828806  437512 system_pods.go:61] "coredns-78fcd69978-xp594" [4f772c12-7fdf-4ccd-b548-9089857210e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 21:15:47.828816  437512 system_pods.go:61] "etcd-newest-cni-20210813211202-393438" [31e482fb-2505-43ad-92cd-9542acf345e0] Running
	I0813 21:15:47.828825  437512 system_pods.go:61] "kube-apiserver-newest-cni-20210813211202-393438" [ed0358e6-2910-4454-bf17-3cfdab38213d] Running
	I0813 21:15:47.828833  437512 system_pods.go:61] "kube-controller-manager-newest-cni-20210813211202-393438" [761818d0-ebbb-435c-bf3f-52e4636e90e8] Running
	I0813 21:15:47.828839  437512 system_pods.go:61] "kube-proxy-29642" [9e8095d0-fe55-430e-a196-2252122e423f] Running
	I0813 21:15:47.828846  437512 system_pods.go:61] "kube-scheduler-newest-cni-20210813211202-393438" [0ccb79ce-95b3-434a-a7a0-401ebe0b1efc] Running
	I0813 21:15:47.828856  437512 system_pods.go:61] "metrics-server-7c784ccb57-dk79w" [ce935cb7-2885-46f8-b983-c24a00c0664e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 21:15:47.828865  437512 system_pods.go:61] "storage-provisioner" [37826e43-1db5-4693-b6ba-bebfca4824d4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 21:15:47.828874  437512 system_pods.go:74] duration metric: took 9.995286ms to wait for pod list to return data ...
	I0813 21:15:47.828884  437512 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:15:47.833790  437512 default_sa.go:45] found service account: "default"
	I0813 21:15:47.833815  437512 default_sa.go:55] duration metric: took 4.924035ms for default service account to be created ...
	I0813 21:15:47.833826  437512 kubeadm.go:547] duration metric: took 1.430706486s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0813 21:15:47.833848  437512 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:15:47.837821  437512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:15:47.837848  437512 node_conditions.go:123] node cpu capacity is 2
	I0813 21:15:47.837862  437512 node_conditions.go:105] duration metric: took 4.008826ms to run NodePressure ...
	I0813 21:15:47.837872  437512 start.go:231] waiting for startup goroutines ...
	I0813 21:15:48.425976  437512 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.039837977s)
	I0813 21:15:48.426038  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:48.426060  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:48.426443  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) DBG | Closing plugin on server side
	I0813 21:15:48.426458  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:48.426474  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:48.426486  437512 main.go:130] libmachine: Making call to close driver server
	I0813 21:15:48.426555  437512 main.go:130] libmachine: (newest-cni-20210813211202-393438) Calling .Close
	I0813 21:15:48.426801  437512 main.go:130] libmachine: Successfully made call to close driver server
	I0813 21:15:48.426821  437512 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 21:15:48.428769  437512 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0813 21:15:48.428794  437512 addons.go:344] enableAddons completed in 2.025690093s
	I0813 21:15:48.491479  437512 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0813 21:15:48.493355  437512 out.go:177] 
	W0813 21:15:48.493542  437512 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0813 21:15:48.495222  437512 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0813 21:15:48.496754  437512 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210813211202-393438" cluster and "default" namespace by default
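The "minor skew: 2" figure above comes from comparing the minor components of the kubectl and cluster versions (1.20 vs 1.22). A simplified sketch of that comparison; minikube's real check uses a proper semver parser:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch[-suffix]" version strings.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.20.5", "1.22.0-rc.0")) // 2, as reported above
}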
	I0813 21:15:47.438852  436805 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:15:47.456464  436805 api_server.go:70] duration metric: took 1m25.437949258s to wait for apiserver process to appear ...
	I0813 21:15:47.456492  436805 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:15:47.456523  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 21:15:47.456576  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 21:15:47.505913  436805 cri.go:76] found id: "aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451"
	I0813 21:15:47.505949  436805 cri.go:76] found id: ""
	I0813 21:15:47.505959  436805 logs.go:270] 1 containers: [aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451]
	I0813 21:15:47.506014  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.512003  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 21:15:47.512121  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 21:15:47.562255  436805 cri.go:76] found id: "70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679"
	I0813 21:15:47.562283  436805 cri.go:76] found id: ""
	I0813 21:15:47.562292  436805 logs.go:270] 1 containers: [70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679]
	I0813 21:15:47.562343  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.569950  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 21:15:47.570018  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 21:15:47.627884  436805 cri.go:76] found id: "549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892"
	I0813 21:15:47.627919  436805 cri.go:76] found id: ""
	I0813 21:15:47.627928  436805 logs.go:270] 1 containers: [549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892]
	I0813 21:15:47.627990  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.634950  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 21:15:47.635077  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 21:15:47.680678  436805 cri.go:76] found id: "8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86"
	I0813 21:15:47.680700  436805 cri.go:76] found id: ""
	I0813 21:15:47.680708  436805 logs.go:270] 1 containers: [8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86]
	I0813 21:15:47.680770  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.687666  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 21:15:47.687751  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 21:15:47.747094  436805 cri.go:76] found id: "b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050"
	I0813 21:15:47.747120  436805 cri.go:76] found id: ""
	I0813 21:15:47.747128  436805 logs.go:270] 1 containers: [b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050]
	I0813 21:15:47.747178  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.753941  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 21:15:47.754008  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 21:15:47.800786  436805 cri.go:76] found id: ""
	I0813 21:15:47.800813  436805 logs.go:270] 0 containers: []
	W0813 21:15:47.800822  436805 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 21:15:47.800831  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 21:15:47.800887  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 21:15:47.850888  436805 cri.go:76] found id: "3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee"
	I0813 21:15:47.850959  436805 cri.go:76] found id: "4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181"
	I0813 21:15:47.850984  436805 cri.go:76] found id: ""
	I0813 21:15:47.851007  436805 logs.go:270] 2 containers: [3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee 4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181]
	I0813 21:15:47.851097  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.857718  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.867701  436805 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 21:15:47.867765  436805 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 21:15:47.911579  436805 cri.go:76] found id: "e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8"
	I0813 21:15:47.911604  436805 cri.go:76] found id: ""
	I0813 21:15:47.911611  436805 logs.go:270] 1 containers: [e046b6d8979b83684d1dee89eada5f0e3c09731a2a55641c5667f6614ece4df8]
	I0813 21:15:47.911657  436805 ssh_runner.go:149] Run: which crictl
	I0813 21:15:47.917687  436805 logs.go:123] Gathering logs for containerd ...
	I0813 21:15:47.917721  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 21:15:47.968208  436805 logs.go:123] Gathering logs for container status ...
	I0813 21:15:47.968255  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 21:15:48.043680  436805 logs.go:123] Gathering logs for dmesg ...
	I0813 21:15:48.043725  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 21:15:48.061555  436805 logs.go:123] Gathering logs for kube-apiserver [aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451] ...
	I0813 21:15:48.061592  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 aba41f9a58effab52b2606e6402a8d6e51b6ac55b1f0c6626f460d95ca393451"
	I0813 21:15:48.182105  436805 logs.go:123] Gathering logs for coredns [549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892] ...
	I0813 21:15:48.182159  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 549abc3852bfc93aa6a82d79e6783c5fe3e46d1f8d36c3b4a223d8d63bea0892"
	I0813 21:15:48.237209  436805 logs.go:123] Gathering logs for kube-scheduler [8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86] ...
	I0813 21:15:48.237253  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8dbfbd8e3c08a37afac3138f8eff6cf4e07537077bff9a61055ab9dfa6983b86"
	I0813 21:15:48.329623  436805 logs.go:123] Gathering logs for kube-proxy [b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050] ...
	I0813 21:15:48.329665  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 b04a00b1021390880d267787cc3f78ce64cd295d67e7fe4fd12b6a70a6c6c050"
	I0813 21:15:48.375953  436805 logs.go:123] Gathering logs for storage-provisioner [3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee] ...
	I0813 21:15:48.375994  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3137521a265fd9115928a8b4212c6ed191c1fcf90cdd62eff71b62089af61aee"
	I0813 21:15:48.424831  436805 logs.go:123] Gathering logs for kubelet ...
	I0813 21:15:48.424878  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 21:15:48.513244  436805 logs.go:138] Found kubelet problem: Aug 13 21:14:21 cilium-20210813205926-393438 kubelet[2812]: E0813 21:14:21.514897    2812 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:cilium-20210813205926-393438" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'cilium-20210813205926-393438' and this object
	I0813 21:15:48.523399  436805 logs.go:123] Gathering logs for describe nodes ...
	I0813 21:15:48.523433  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 21:15:48.775937  436805 logs.go:123] Gathering logs for etcd [70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679] ...
	I0813 21:15:48.775982  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 70bbde4d3ea6895293027b4a5835f0b1a263977af6a140c658fa457f80740679"
	I0813 21:15:48.832912  436805 logs.go:123] Gathering logs for storage-provisioner [4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181] ...
	I0813 21:15:48.832952  436805 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 4c64293d41db04d38faaad1924e63d14cc64d679f7c732a3cab03d2876358181"
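Each "Gathering logs for ..." step above shells out to crictl to tail the last 400 lines of one container's logs. A local sketch of that call; the container ID below is truncated from the etcd ID in the log:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the `sudo /bin/crictl logs --tail 400 <id>`
// invocations above, run locally rather than through ssh_runner.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := tailContainerLogs("70bbde4d3ea6", 400)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Print(out)
}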
	I0813 21:15:46.775176  438411 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0813 21:15:46.775248  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) Calling .GetIP
	I0813 21:15:46.781874  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:46.782278  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:81:dc", ip: ""} in network mk-custom-weave-20210813205926-393438: {Iface:virbr1 ExpiryTime:2021-08-13 22:15:21 +0000 UTC Type:0 Mac:52:54:00:0d:81:dc Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-weave-20210813205926-393438 Clientid:01:52:54:00:0d:81:dc}
	I0813 21:15:46.782308  438411 main.go:130] libmachine: (custom-weave-20210813205926-393438) DBG | domain custom-weave-20210813205926-393438 has defined IP address 192.168.39.226 and MAC address 52:54:00:0d:81:dc in network mk-custom-weave-20210813205926-393438
	I0813 21:15:46.782611  438411 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 21:15:46.788480  438411 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:15:46.801474  438411 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 21:15:46.801564  438411 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:15:46.842634  438411 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:15:46.842725  438411 containerd.go:517] Images already preloaded, skipping extraction
	I0813 21:15:46.842811  438411 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 21:15:46.882814  438411 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 21:15:46.882847  438411 cache_images.go:74] Images are preloaded, skipping loading
	I0813 21:15:46.882912  438411 ssh_runner.go:149] Run: sudo crictl info
	I0813 21:15:46.925947  438411 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 21:15:46.925993  438411 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 21:15:46.926011  438411 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210813205926-393438 NodeName:custom-weave-20210813205926-393438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 21:15:46.926185  438411 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20210813205926-393438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
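Configs like the three YAML documents above are rendered from Go templates filled with the kubeadm options logged at kubeadm.go:153. A toy rendering sketch; the template and struct fields here are illustrative and far smaller than minikube's real template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, struct {
		AdvertiseAddress, CRISocket, NodeName string
		BindPort                              int
	}{"192.168.39.226", "/run/containerd/containerd.sock",
		"custom-weave-20210813205926-393438", 8443})
	if err != nil {
		panic(err)
	}
}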
	
	I0813 21:15:46.926292  438411 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210813205926-393438 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.226 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
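The ExecStart line above is assembled from per-flag key/value options; sorting the keys keeps the generated unit stable across regenerations. A sketch of that assembly, with a hypothetical subset of the flags:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// kubeletFlags renders a map of options as "--key=value" pairs in
// deterministic order, like the flags in the unit file above.
func kubeletFlags(opts map[string]string) string {
	keys := make([]string, 0, len(opts))
	for k := range opts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(kubeletFlags(map[string]string{
		"hostname-override": "custom-weave-20210813205926-393438",
		"node-ip":           "192.168.39.226",
		"container-runtime": "remote",
	}))
}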
	I0813 21:15:46.926352  438411 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 21:15:46.936513  438411 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 21:15:46.936586  438411 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 21:15:46.946466  438411 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (550 bytes)
	I0813 21:15:46.962511  438411 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 21:15:46.977175  438411 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0813 21:15:46.993218  438411 ssh_runner.go:149] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0813 21:15:46.999160  438411 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 21:15:47.012477  438411 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438 for IP: 192.168.39.226
	I0813 21:15:47.012548  438411 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key
	I0813 21:15:47.012568  438411 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key
	I0813 21:15:47.012639  438411 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.key
	I0813 21:15:47.012651  438411 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.crt with IP's: []
	I0813 21:15:47.124768  438411 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.crt ...
	I0813 21:15:47.124805  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.crt: {Name:mkf68177a82f56c883cedec6bf0c04ccaca8ebf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.125023  438411 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.key ...
	I0813 21:15:47.125042  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/client.key: {Name:mk763eadae7ef896b9df59472a9d8f4821ab2a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.125162  438411 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key.1754aec7
	I0813 21:15:47.125175  438411 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt.1754aec7 with IP's: [192.168.39.226 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 21:15:47.294473  438411 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt.1754aec7 ...
	I0813 21:15:47.294513  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt.1754aec7: {Name:mk174d7ae87de36f1a33026f02855b79dd1091cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.294725  438411 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key.1754aec7 ...
	I0813 21:15:47.294745  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key.1754aec7: {Name:mk229f9505e708bb4da0cbf5a17757e74cd1c79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.294837  438411 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt.1754aec7 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt
	I0813 21:15:47.294901  438411 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key.1754aec7 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key
	I0813 21:15:47.294950  438411 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.key
	I0813 21:15:47.294958  438411 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.crt with IP's: []
	I0813 21:15:47.426276  438411 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.crt ...
	I0813 21:15:47.426308  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.crt: {Name:mk0ab7f57b3540ac2fadf9373aaf3ec7a4298000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.426508  438411 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.key ...
	I0813 21:15:47.426526  438411 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.key: {Name:mk7673696f0f1a097f3b8c96c8380928ab980ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 21:15:47.426783  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem (1338 bytes)
	W0813 21:15:47.426839  438411 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438_empty.pem, impossibly tiny 0 bytes
	I0813 21:15:47.426853  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 21:15:47.426886  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem (1078 bytes)
	I0813 21:15:47.426917  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem (1123 bytes)
	I0813 21:15:47.426957  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/key.pem (1675 bytes)
	I0813 21:15:47.427021  438411 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem (1708 bytes)
	I0813 21:15:47.428016  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 21:15:47.448572  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 21:15:47.472404  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 21:15:47.496611  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/custom-weave-20210813205926-393438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 21:15:47.519119  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 21:15:47.542018  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 21:15:47.566171  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 21:15:47.588205  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 21:15:47.611503  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/ssl/certs/3934382.pem --> /usr/share/ca-certificates/3934382.pem (1708 bytes)
	I0813 21:15:47.635773  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 21:15:47.661086  438411 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/393438.pem --> /usr/share/ca-certificates/393438.pem (1338 bytes)
	I0813 21:15:47.686614  438411 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 21:15:47.703792  438411 ssh_runner.go:149] Run: openssl version
	I0813 21:15:47.712952  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3934382.pem && ln -fs /usr/share/ca-certificates/3934382.pem /etc/ssl/certs/3934382.pem"
	I0813 21:15:47.724791  438411 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3934382.pem
	I0813 21:15:47.731938  438411 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 13 20:20 /usr/share/ca-certificates/3934382.pem
	I0813 21:15:47.732002  438411 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3934382.pem
	I0813 21:15:47.740805  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3934382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 21:15:47.752708  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 21:15:47.766917  438411 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:15:47.772551  438411 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 13 20:09 /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:15:47.772603  438411 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 21:15:47.779677  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 21:15:47.788842  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/393438.pem && ln -fs /usr/share/ca-certificates/393438.pem /etc/ssl/certs/393438.pem"
	I0813 21:15:47.800090  438411 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/393438.pem
	I0813 21:15:47.807003  438411 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 13 20:20 /usr/share/ca-certificates/393438.pem
	I0813 21:15:47.807055  438411 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393438.pem
	I0813 21:15:47.820201  438411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/393438.pem /etc/ssl/certs/51391683.0"
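The command sequence above is the standard OpenSSL hash-link install: each PEM is copied under /usr/share/ca-certificates, its subject hash is printed with "openssl x509 -hash -noout -in <pem>", and /etc/ssl/certs/<hash>.0 is symlinked at it so libssl can resolve the CA by hash at verification time. A minimal sketch of that pattern in Go, assuming a local run rather than minikube's ssh_runner (the installCA helper name is hypothetical, not minikube's certs.go):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA mirrors the "openssl x509 -hash" + "ln -fs" pair above: it
    // computes the certificate's subject hash and links
    // /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL can find it by hash.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // the "-f" in "ln -fs": drop any stale link first
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
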
	I0813 21:15:47.832695  438411 kubeadm.go:390] StartCluster: {Name:custom-weave-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:15:47.832794  438411 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 21:15:47.832858  438411 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 21:15:47.877515  438411 cri.go:76] found id: ""
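The empty "found id" result above means no kube-system containers exist yet, so this is a cold start. The query itself is just crictl with a pod-namespace label filter; a rough standalone sketch of the same call, assuming it runs on the node directly rather than over ssh_runner (listKubeSystemContainers is a hypothetical wrapper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers runs the same crictl query as above: IDs of
    // all containers (running or exited) whose pod is in kube-system.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }
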
	I0813 21:15:47.877585  438411 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 21:15:47.889689  438411 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 21:15:47.899859  438411 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 21:15:47.910101  438411 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
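The skipped cleanup above comes down to a plain existence check: the four control-plane kubeconfigs are listed with ls, and if any is missing (ls exits with status 2) there is no stale cluster state to tear down before kubeadm init. A sketch of the equivalent check, assuming local file access instead of ssh_runner (staleConfigExists is a hypothetical name, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "os"
    )

    // staleConfigExists reports whether a previous control plane left all
    // four kubeconfigs behind; one missing file means there is nothing
    // stale to clean up before "kubeadm init".
    func staleConfigExists() bool {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if _, err := os.Stat(f); err != nil {
                return false // matches the "ls" exiting with status 2 above
            }
        }
        return true
    }

    func main() {
        fmt.Println("stale config present:", staleConfigExists())
    }
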
	I0813 21:15:47.910178  438411 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 21:15:48.955171  438411 out.go:204]   - Generating certificates and keys ...
	I0813 21:15:47.734102  437853 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace has status "Ready":"False"
	I0813 21:15:50.231694  437853 pod_ready.go:92] pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace has status "Ready":"True"
	I0813 21:15:50.231726  437853 pod_ready.go:81] duration metric: took 18.023673167s waiting for pod "calico-kube-controllers-58497c65d5-bjmlz" in "kube-system" namespace to be "Ready" ...
	I0813 21:15:50.231742  437853 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-55xx6" in "kube-system" namespace to be "Ready" ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7c3f653711073       cf9cba6c3e4a8       17 seconds ago      Running             kube-controller-manager   4                   88af86424b3ca
	c3024159112ae       b2462aa94d403       23 seconds ago      Running             kube-apiserver            1                   7cf9902b15a51
	c6b1250932083       0048118155842       28 seconds ago      Running             etcd                      1                   39b7a0897fe8b
	5084fc733d67e       7da2efaa5b480       38 seconds ago      Running             kube-scheduler            1                   0ed4f38ed8ccc
	116848b32df56       cf9cba6c3e4a8       50 seconds ago      Exited              kube-controller-manager   3                   88af86424b3ca
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:14:05 UTC, end at Fri 2021-08-13 21:15:52 UTC. --
	Aug 13 21:15:13 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:13.974940680Z" level=error msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-078687633 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.269996621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-newest-cni-20210813211202-393438,Uid:23f20a6ab1f9bfbd5f9cb779fc57f4ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.279548640Z" level=info msg="CreateContainer within sandbox \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.328886490Z" level=info msg="CreateContainer within sandbox \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.329779026Z" level=info msg="StartContainer for \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.760090018Z" level=info msg="StartContainer for \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\" returns successfully"
	Aug 13 21:15:15 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:15.809292676Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.433043087Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.437361525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.441026707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.444041365Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.444574191Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\" returns image reference \"sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.449718035Z" level=info msg="CreateContainer within sandbox \"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.723429988Z" level=info msg="CreateContainer within sandbox \"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.724398735Z" level=info msg="StartContainer for \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\""
	Aug 13 21:15:24 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:24.140636481Z" level=info msg="StartContainer for \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\" returns successfully"
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.813753637Z" level=info msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.871547596Z" level=info msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\""
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.873257444Z" level=info msg="StartContainer for \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\""
	Aug 13 21:15:29 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:29.418344557Z" level=info msg="StartContainer for \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\" returns successfully"
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.815030463Z" level=info msg="CreateContainer within sandbox \"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.868296551Z" level=info msg="CreateContainer within sandbox \"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\""
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.870434423Z" level=info msg="StartContainer for \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\""
	Aug 13 21:15:35 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:35.700829030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 21:15:35 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:35.772433833Z" level=info msg="StartContainer for \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug13 21:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.099111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug13 21:14] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.657200] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038046] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.036875] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1734 comm=systemd-network
	[  +0.934988] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.243361] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011161] vboxguest: PCI device not found, probably running on physical hardware.
	[ +23.456214] systemd-fstab-generator[2074]: Ignoring "noauto" for root device
	[  +1.300348] systemd-fstab-generator[2107]: Ignoring "noauto" for root device
	[  +0.129987] systemd-fstab-generator[2120]: Ignoring "noauto" for root device
	[  +0.220069] systemd-fstab-generator[2150]: Ignoring "noauto" for root device
	[  +6.856447] systemd-fstab-generator[2346]: Ignoring "noauto" for root device
	[Aug13 21:15] systemd-fstab-generator[3104]: Ignoring "noauto" for root device
	[  +0.763030] systemd-fstab-generator[3159]: Ignoring "noauto" for root device
	[  +0.973563] systemd-fstab-generator[3212]: Ignoring "noauto" for root device
	[Aug13 21:16] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6] <==
	* {"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"5.040887474s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.480Z","caller":"traceutil/trace.go:171","msg":"trace[1946565333] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:543; }","duration":"5.046178187s","start":"2021-08-13T21:15:40.433Z","end":"2021-08-13T21:15:45.479Z","steps":["trace[1946565333] 'range keys from in-memory index tree'  (duration: 5.04029265s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.482Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.241684637s","expected-duration":"100ms","prefix":"","request":"header:<ID:3642140244349460335 > lease_revoke:<id:328b7b415c780801>","response":"size:28"}
	{"level":"info","ts":"2021-08-13T21:15:45.483Z","caller":"traceutil/trace.go:171","msg":"trace[1988578343] linearizableReadLoop","detail":"{readStateIndex:567; appliedIndex:565; }","duration":"4.238399066s","start":"2021-08-13T21:15:41.244Z","end":"2021-08-13T21:15:45.483Z","steps":["trace[1988578343] 'read index received'  (duration: 3.268448854s)","trace[1988578343] 'applied index is now lower than readState.Index'  (duration: 969.93816ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:15:45.476Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"8.165339924s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:expand-controller\" ","response":"range_response_count:1 size:745"}
	{"level":"info","ts":"2021-08-13T21:15:45.484Z","caller":"traceutil/trace.go:171","msg":"trace[185560543] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:expand-controller; range_end:; response_count:1; response_revision:543; }","duration":"8.173511245s","start":"2021-08-13T21:15:37.310Z","end":"2021-08-13T21:15:45.484Z","steps":["trace[185560543] 'range keys from in-memory index tree'  (duration: 8.163686974s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:37.310Z","time spent":"8.174930389s","remote":"127.0.0.1:49096","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":768,"request content":"key:\"/registry/clusterrolebindings/system:controller:expand-controller\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.046803662s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.486Z","caller":"traceutil/trace.go:171","msg":"trace[1776376833] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"6.059156266s","start":"2021-08-13T21:15:39.427Z","end":"2021-08-13T21:15:45.486Z","steps":["trace[1776376833] 'range keys from in-memory index tree'  (duration: 6.045692216s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:39.427Z","time spent":"6.059219706s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.475Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"8.130205905s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.489Z","caller":"traceutil/trace.go:171","msg":"trace[729064226] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"8.144792565s","start":"2021-08-13T21:15:37.344Z","end":"2021-08-13T21:15:45.489Z","steps":["trace[729064226] 'range keys from in-memory index tree'  (duration: 8.129174734s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.490Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:37.344Z","time spent":"8.145250502s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.234318221s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.491Z","caller":"traceutil/trace.go:171","msg":"trace[245584525] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"7.252074411s","start":"2021-08-13T21:15:38.239Z","end":"2021-08-13T21:15:45.491Z","steps":["trace[245584525] 'range keys from in-memory index tree'  (duration: 7.234018007s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:38.239Z","time spent":"7.252420536s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.861781651s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.493Z","caller":"traceutil/trace.go:171","msg":"trace[562982811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"1.867724197s","start":"2021-08-13T21:15:43.625Z","end":"2021-08-13T21:15:45.493Z","steps":["trace[562982811] 'agreement among raft nodes before linearized reading'  (duration: 1.857923084s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:43.625Z","time spent":"1.867794956s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"610.879228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.494Z","caller":"traceutil/trace.go:171","msg":"trace[44290579] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"617.154665ms","start":"2021-08-13T21:15:44.876Z","end":"2021-08-13T21:15:45.494Z","steps":["trace[44290579] 'agreement among raft nodes before linearized reading'  (duration: 610.838286ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.494Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:44.876Z","time spent":"617.216149ms","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.242852345s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.504Z","caller":"traceutil/trace.go:171","msg":"trace[1547725281] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"1.25977393s","start":"2021-08-13T21:15:44.245Z","end":"2021-08-13T21:15:45.504Z","steps":["trace[1547725281] 'agreement among raft nodes before linearized reading'  (duration: 1.242814194s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:44.245Z","time spent":"1.259841649s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	
	* 
	* ==> kernel <==
	*  21:16:32 up 2 min,  0 users,  load average: 1.17, 0.61, 0.23
	Linux newest-cni-20210813211202-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847] <==
	* I0813 21:15:35.674966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:15:35.679152       1 cache.go:39] Caches are synced for autoregister controller
	I0813 21:15:35.681323       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:15:36.216740       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:15:37.336621       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:15:37.336904       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0813 21:15:38.336697       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:15:38.337395       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:15:38.338267       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:15:45.495743       1 trace.go:205] Trace[2112370956]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:2beb930e-f810-4a6a-806e-0349d8fad1de,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:15:37.292) (total time: 8202ms):
	Trace[2112370956]: ---"About to write a response" 8201ms (21:15:45.494)
	Trace[2112370956]: [8.202675954s] [8.202675954s] END
	I0813 21:15:45.500044       1 trace.go:205] Trace[697574104]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 21:15:37.441) (total time: 8058ms):
	Trace[697574104]: ---"initial value restored" 8039ms (21:15:45.480)
	Trace[697574104]: [8.058757473s] [8.058757473s] END
	I0813 21:15:45.501899       1 trace.go:205] Trace[126321225]: "Patch" url:/api/v1/namespaces/default/events/newest-cni-20210813211202-393438.169afa2c97dd0b10,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:df70debe-3909-4bca-b25f-6b3f66d948f1,client:192.168.61.119,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:15:37.440) (total time: 8060ms):
	Trace[126321225]: ---"About to apply patch" 8039ms (21:15:45.480)
	Trace[126321225]: [8.060597478s] [8.060597478s] END
	I0813 21:15:46.161387       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:15:46.194245       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:15:46.317408       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:15:46.349236       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:15:46.369688       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 21:15:47.942756       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [116848b32df564f7527d30d663d98dc9754820ec664386dba0d7b2d25e56a787] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:151 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc000829c00, 0x203000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(...)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000286ed0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000286ed0, 0x5175ae0, 0xc000c9e570, 0x4c62101, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000286ed0, 0x3b9aca00, 0x0, 0x1, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000286ed0, 0x3b9aca00, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 144 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000286ef0, 0x5175ae0, 0xc000c24690, 0x440a54e7e3c50701, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000286ef0, 0xdf8475800, 0x0, 0x9f0c8eb260a49901, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000286ef0, 0xdf8475800, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	* 
	* ==> kube-controller-manager [7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f] <==
	* I0813 21:15:48.024004       1 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
	I0813 21:15:48.024381       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 21:15:48.035084       1 controllermanager.go:577] Started "endpointslicemirroring"
	I0813 21:15:48.039794       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0813 21:15:48.040041       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0813 21:15:48.055058       1 controllermanager.go:577] Started "disruption"
	I0813 21:15:48.058328       1 disruption.go:363] Starting disruption controller
	I0813 21:15:48.058896       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0813 21:15:48.066149       1 controllermanager.go:577] Started "statefulset"
	I0813 21:15:48.066754       1 stateful_set.go:148] Starting stateful set controller
	I0813 21:15:48.067749       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0813 21:15:48.091305       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0813 21:15:48.094568       1 pv_controller_base.go:308] Starting persistent volume controller
	I0813 21:15:48.095364       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0813 21:15:48.126077       1 attach_detach_controller.go:328] Starting attach detach controller
	I0813 21:15:48.126447       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0813 21:15:48.127715       1 controllermanager.go:577] Started "attachdetach"
	I0813 21:15:48.142252       1 controllermanager.go:577] Started "root-ca-cert-publisher"
	I0813 21:15:48.143766       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0813 21:15:48.144043       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0813 21:15:48.153989       1 controllermanager.go:577] Started "csrcleaner"
	I0813 21:15:48.154785       1 cleaner.go:82] Starting CSR cleaner controller
	I0813 21:15:48.175698       1 controllermanager.go:577] Started "bootstrapsigner"
	I0813 21:15:48.176801       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0813 21:15:48.201227       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1] <==
	* E0813 21:15:25.034371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.119:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:25.592124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.119:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:25.605660       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.119:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:29.289283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.119:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:35.317677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:15:35.321290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321666       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.322055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:15:35.322148       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:15:35.322266       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:15:35.322363       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:15:35.324710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:15:35.379529       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:15:35.442188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:15:36.005200       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.007622       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.007978       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008279       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008607       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008706       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008892       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008956       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.009242       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:14:05 UTC, end at Fri 2021-08-13 21:16:32 UTC. --
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.554092    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.654864    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.755270    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.857346    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.957855    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.059374    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.163155    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.264042    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.364602    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.465299    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.566831    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.668662    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.769756    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.870447    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.971712    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.072617    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.173301    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.274100    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.375072    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.476060    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.576636    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: I0813 21:15:49.663562    2354 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:16:32.600470  438890 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438: exit status 2 (278.086741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210813211202-393438 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210813211202-393438 logs -n 25: exit status 110 (41.003585932s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:43 UTC | Fri, 13 Aug 2021 21:12:44 UTC |
	|         | logs -n 25                                                 |                                       |         |         |                               |                               |
	| -p      | old-k8s-version-20210813205952-393438                      | old-k8s-version-20210813205952-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:45 UTC | Fri, 13 Aug 2021 21:12:46 UTC |
	|         | logs -n 25                                                 |                                       |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:47 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                       |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210813205952-393438 | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:48 UTC | Fri, 13 Aug 2021 21:12:48 UTC |
	|         | old-k8s-version-20210813205952-393438                      |                                       |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210813210115-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:05:02 UTC | Fri, 13 Aug 2021 21:13:29 UTC |
	|         | embed-certs-20210813210115-393438                          |                                       |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                       |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                       |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                       |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                       |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                       |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813210115-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:43 UTC | Fri, 13 Aug 2021 21:13:44 UTC |
	|         | embed-certs-20210813210115-393438                          |                                       |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                       |         |         |                               |                               |
	| start   | -p newest-cni-20210813211202-393438 --memory=2200          | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:02 UTC | Fri, 13 Aug 2021 21:13:48 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                       |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                       |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                       |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                       |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                       |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                       |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:48 UTC | Fri, 13 Aug 2021 21:13:49 UTC |
	|         | newest-cni-20210813211202-393438                           |                                       |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                       |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                       |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:49 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                       |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:53 UTC | Fri, 13 Aug 2021 21:13:53 UTC |
	|         | newest-cni-20210813211202-393438                           |                                       |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                       |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813210115-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:10 UTC | Fri, 13 Aug 2021 21:14:11 UTC |
	|         | embed-certs-20210813210115-393438                          |                                       |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210813210115-393438     | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:11 UTC | Fri, 13 Aug 2021 21:14:11 UTC |
	|         | embed-certs-20210813210115-393438                          |                                       |         |         |                               |                               |
	| start   | -p auto-20210813205925-393438                              | auto-20210813205925-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:42 UTC | Fri, 13 Aug 2021 21:14:48 UTC |
	|         | --memory=2048                                              |                                       |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                       |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                       |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                       |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                       |         |         |                               |                               |
	| ssh     | -p auto-20210813205925-393438                              | auto-20210813205925-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:48 UTC | Fri, 13 Aug 2021 21:14:48 UTC |
	|         | pgrep -a kubelet                                           |                                       |         |         |                               |                               |
	| delete  | -p auto-20210813205925-393438                              | auto-20210813205925-393438            | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:15:03 UTC | Fri, 13 Aug 2021 21:15:04 UTC |
	| start   | -p newest-cni-20210813211202-393438 --memory=2200          | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:13:54 UTC | Fri, 13 Aug 2021 21:15:48 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                       |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                       |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                       |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                       |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                       |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                       |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813211202-393438      | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:15:48 UTC | Fri, 13 Aug 2021 21:15:49 UTC |
	|         | newest-cni-20210813211202-393438                           |                                       |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                       |         |         |                               |                               |
	| start   | -p                                                         | calico-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:14:11 UTC | Fri, 13 Aug 2021 21:16:10 UTC |
	|         | calico-20210813205926-393438                               |                                       |         |         |                               |                               |
	|         | --memory=2048                                              |                                       |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                       |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                       |         |         |                               |                               |
	|         | --cni=calico --driver=kvm2                                 |                                       |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                       |         |         |                               |                               |
	| start   | -p                                                         | cilium-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:12:48 UTC | Fri, 13 Aug 2021 21:16:10 UTC |
	|         | cilium-20210813205926-393438                               |                                       |         |         |                               |                               |
	|         | --memory=2048                                              |                                       |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                       |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                       |         |         |                               |                               |
	|         | --cni=cilium --driver=kvm2                                 |                                       |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                       |         |         |                               |                               |
	| ssh     | -p                                                         | calico-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:16:15 UTC | Fri, 13 Aug 2021 21:16:15 UTC |
	|         | calico-20210813205926-393438                               |                                       |         |         |                               |                               |
	|         | pgrep -a kubelet                                           |                                       |         |         |                               |                               |
	| ssh     | -p                                                         | cilium-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:16:15 UTC | Fri, 13 Aug 2021 21:16:15 UTC |
	|         | cilium-20210813205926-393438                               |                                       |         |         |                               |                               |
	|         | pgrep -a kubelet                                           |                                       |         |         |                               |                               |
	| delete  | -p                                                         | calico-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:16:26 UTC | Fri, 13 Aug 2021 21:16:27 UTC |
	|         | calico-20210813205926-393438                               |                                       |         |         |                               |                               |
	| delete  | -p                                                         | cilium-20210813205926-393438          | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:16:29 UTC | Fri, 13 Aug 2021 21:16:30 UTC |
	|         | cilium-20210813205926-393438                               |                                       |         |         |                               |                               |
	| start   | -p                                                         | custom-weave-20210813205926-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:15:04 UTC | Fri, 13 Aug 2021 21:16:32 UTC |
	|         | custom-weave-20210813205926-393438                         |                                       |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                            |                                       |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                       |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml                               |                                       |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                       |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                       |         |         |                               |                               |
	| ssh     | -p                                                         | custom-weave-20210813205926-393438    | jenkins | v1.22.0 | Fri, 13 Aug 2021 21:16:33 UTC | Fri, 13 Aug 2021 21:16:33 UTC |
	|         | custom-weave-20210813205926-393438                         |                                       |         |         |                               |                               |
	|         | pgrep -a kubelet                                           |                                       |         |         |                               |                               |
	|---------|------------------------------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
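
	For reference, the newest-cni start invocation recorded in the audit rows above, assembled into a single re-runnable command (profile name and flags exactly as logged):

	out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0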
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 21:16:30
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 21:16:30.420421  439386 out.go:298] Setting OutFile to fd 1 ...
	I0813 21:16:30.420504  439386 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:16:30.420509  439386 out.go:311] Setting ErrFile to fd 2...
	I0813 21:16:30.420515  439386 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 21:16:30.420639  439386 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 21:16:30.420975  439386 out.go:305] Setting JSON to false
	I0813 21:16:30.461138  439386 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":7153,"bootTime":1628882238,"procs":178,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 21:16:30.461245  439386 start.go:121] virtualization: kvm guest
	I0813 21:16:30.463628  439386 out.go:177] * [flannel-20210813205926-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 21:16:30.465141  439386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 21:16:30.463795  439386 notify.go:169] Checking for updates...
	I0813 21:16:30.466488  439386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 21:16:30.467778  439386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:16:30.469043  439386 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 21:16:30.469695  439386 config.go:177] Loaded profile config "custom-weave-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:16:30.469852  439386 config.go:177] Loaded profile config "kindnet-20210813205926-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 21:16:30.470012  439386 config.go:177] Loaded profile config "newest-cni-20210813211202-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 21:16:30.470065  439386 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 21:16:30.503967  439386 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 21:16:30.503996  439386 start.go:278] selected driver: kvm2
	I0813 21:16:30.504003  439386 start.go:751] validating driver "kvm2" against <nil>
	I0813 21:16:30.504022  439386 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 21:16:30.505335  439386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:16:30.505499  439386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 21:16:30.517189  439386 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0813 21:16:30.517245  439386 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 21:16:30.517421  439386 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 21:16:30.517444  439386 cni.go:93] Creating CNI manager for "flannel"
	I0813 21:16:30.517452  439386 start_flags.go:272] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0813 21:16:30.517461  439386 start_flags.go:277] config:
	{Name:flannel-20210813205926-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:flannel-20210813205926-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 21:16:30.517585  439386 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 21:16:27.906741  439142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 21:16:27.906881  439142 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin/docker-machine-driver-kvm2
	I0813 21:16:27.906942  439142 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 21:16:27.916786  439142 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0813 21:16:27.917201  439142 main.go:130] libmachine: () Calling .GetVersion
	I0813 21:16:27.917693  439142 main.go:130] libmachine: Using API Version  1
	I0813 21:16:27.917712  439142 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 21:16:27.918097  439142 main.go:130] libmachine: () Calling .GetMachineName
	I0813 21:16:27.918264  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Calling .GetMachineName
	I0813 21:16:27.918405  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Calling .DriverName
	I0813 21:16:27.918538  439142 start.go:160] libmachine.API.Create for "kindnet-20210813205926-393438" (driver="kvm2")
	I0813 21:16:27.918569  439142 client.go:168] LocalClient.Create starting
	I0813 21:16:27.918601  439142 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/ca.pem
	I0813 21:16:27.918630  439142 main.go:130] libmachine: Decoding PEM data...
	I0813 21:16:27.918651  439142 main.go:130] libmachine: Parsing certificate...
	I0813 21:16:27.918813  439142 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/certs/cert.pem
	I0813 21:16:27.918842  439142 main.go:130] libmachine: Decoding PEM data...
	I0813 21:16:27.918874  439142 main.go:130] libmachine: Parsing certificate...
	I0813 21:16:27.918934  439142 main.go:130] libmachine: Running pre-create checks...
	I0813 21:16:27.918948  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Calling .PreCreateCheck
	I0813 21:16:27.919256  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Calling .GetConfigRaw
	I0813 21:16:27.919762  439142 main.go:130] libmachine: Creating machine...
	I0813 21:16:27.919779  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Calling .Create
	I0813 21:16:27.919894  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Creating KVM machine...
	I0813 21:16:27.922724  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | found existing default KVM network
	I0813 21:16:27.924921  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:27.924725  439166 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:9f:d1}}
	I0813 21:16:27.925902  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:27.925826  439166 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:46:2e}}
	I0813 21:16:27.927275  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:27.927182  439166 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:a6:3e}}
	I0813 21:16:27.929001  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:27.928908  439166 network.go:288] reserving subnet 192.168.72.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.72.0:0xc0000beb08] misses:0}
	I0813 21:16:27.929043  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:27.928951  439166 network.go:235] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 21:16:27.949049  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | trying to create private KVM network mk-kindnet-20210813205926-393438 192.168.72.0/24...
	I0813 21:16:28.235187  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | private KVM network mk-kindnet-20210813205926-393438 192.168.72.0/24 created
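
	The network.go lines above walk candidate /24 subnets, skip the three already claimed by virbr1-virbr3, and reserve the first free one (192.168.72.0/24). A minimal Go sketch of that first-free-subnet scan, assuming a simplified candidate list and taken-set in place of the real interface probing:

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate CIDR not present in taken,
	// mirroring the "skipping subnet ... that is taken" scan in the log.
	func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			return cidr, true
		}
		return "", false
	}

	func main() {
		// Subnets the log shows as already claimed by virbr1, virbr2, virbr3.
		taken := map[string]bool{
			"192.168.39.0/24": true,
			"192.168.50.0/24": true,
			"192.168.61.0/24": true,
		}
		candidates := []string{
			"192.168.39.0/24", "192.168.50.0/24",
			"192.168.61.0/24", "192.168.72.0/24",
		}
		if cidr, ok := firstFreeSubnet(candidates, taken); ok {
			fmt.Println("using free private subnet", cidr) // 192.168.72.0/24, as in the log
		}
	}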
	I0813 21:16:28.235225  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:28.235145  439166 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:16:28.235245  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438 ...
	I0813 21:16:28.235278  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 21:16:28.235539  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0813 21:16:28.463858  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:28.463702  439166 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438/id_rsa...
	I0813 21:16:28.823982  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:28.823862  439166 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438/kindnet-20210813205926-393438.rawdisk...
	I0813 21:16:28.824024  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Writing magic tar header
	I0813 21:16:28.824055  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Writing SSH key tar header
	I0813 21:16:28.824073  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:28.823983  439166 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438 ...
	I0813 21:16:28.824287  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438
	I0813 21:16:28.824345  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines
	I0813 21:16:28.824382  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/kindnet-20210813205926-393438 (perms=drwx------)
	I0813 21:16:28.824398  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 21:16:28.824423  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337
	I0813 21:16:28.824435  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 21:16:28.824456  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines (perms=drwxr-xr-x)
	I0813 21:16:28.824474  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube (perms=drwxr-xr-x)
	I0813 21:16:28.824488  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337 (perms=drwxr-xr-x)
	I0813 21:16:28.824509  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 21:16:28.824526  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home/jenkins
	I0813 21:16:28.824536  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 21:16:28.824548  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Creating domain...
	I0813 21:16:28.824560  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Checking permissions on dir: /home
	I0813 21:16:28.824569  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | Skipping /home - not owner
	I0813 21:16:28.919887  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:63:da:1e in network default
	I0813 21:16:28.920566  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Ensuring networks are active...
	I0813 21:16:28.920594  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:28.923163  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Ensuring network default is active
	I0813 21:16:28.923530  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Ensuring network mk-kindnet-20210813205926-393438 is active
	I0813 21:16:28.924181  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Getting domain xml...
	I0813 21:16:28.926414  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Creating domain...
	I0813 21:16:29.361224  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) Waiting to get IP...
	I0813 21:16:29.362003  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:29.362684  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:29.362752  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:29.362645  439166 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 21:16:29.781348  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:29.781815  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:29.781836  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:29.781784  439166 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 21:16:30.164359  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:30.164914  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:30.164943  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:30.164865  439166 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 21:16:30.589592  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:30.590093  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:30.590124  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:30.590050  439166 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 21:16:31.064245  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:31.064828  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:31.064859  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:31.064775  439166 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 21:16:31.653524  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:31.654074  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:31.654097  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:31.654012  439166 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 21:16:32.489269  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | domain kindnet-20210813205926-393438 has defined MAC address 52:54:00:b4:85:d9 in network mk-kindnet-20210813205926-393438
	I0813 21:16:32.489690  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | unable to find current IP address of domain kindnet-20210813205926-393438 in network mk-kindnet-20210813205926-393438
	I0813 21:16:32.489717  439142 main.go:130] libmachine: (kindnet-20210813205926-393438) DBG | I0813 21:16:32.489646  439166 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
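
	The retry.go entries above poll for the new VM's IP with a growing delay between attempts (263ms, 381ms, 422ms, ...). A minimal sketch of that wait-for-IP loop; lookupIP is a hypothetical placeholder, not minikube's actual libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// errNoLease stands in for the "unable to find current IP address" state above.
	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query.
	func lookupIP() (string, error) { return "", errNoLease }

	// waitForIP polls lookupIP with a growing delay until the timeout elapses.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the interval between attempts, roughly as logged
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}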
	I0813 21:16:30.822952  438411 pod_ready.go:102] pod "weave-net-24hqj" in "kube-system" namespace has status "Ready":"False"
	I0813 21:16:32.819452  438411 pod_ready.go:92] pod "weave-net-24hqj" in "kube-system" namespace has status "Ready":"True"
	I0813 21:16:32.819478  438411 pod_ready.go:81] duration metric: took 4.410655617s waiting for pod "weave-net-24hqj" in "kube-system" namespace to be "Ready" ...
	I0813 21:16:32.819487  438411 pod_ready.go:38] duration metric: took 9.451341041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 21:16:32.819506  438411 api_server.go:50] waiting for apiserver process to appear ...
	I0813 21:16:32.819552  438411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 21:16:32.836540  438411 api_server.go:70] duration metric: took 9.993279964s to wait for apiserver process to appear ...
	I0813 21:16:32.836565  438411 api_server.go:86] waiting for apiserver healthz status ...
	I0813 21:16:32.836578  438411 api_server.go:239] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0813 21:16:32.843265  438411 api_server.go:265] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0813 21:16:32.844635  438411 api_server.go:139] control plane version: v1.21.3
	I0813 21:16:32.844658  438411 api_server.go:129] duration metric: took 8.084799ms to wait for apiserver health ...
	I0813 21:16:32.844668  438411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 21:16:32.852577  438411 system_pods.go:59] 8 kube-system pods found
	I0813 21:16:32.852609  438411 system_pods.go:61] "coredns-558bd4d5db-4brwp" [3cfca7ec-0126-4e48-8bab-bc373d1afabb] Running
	I0813 21:16:32.852617  438411 system_pods.go:61] "etcd-custom-weave-20210813205926-393438" [9a0a3586-6194-4a82-855a-9240fb9eb63e] Running
	I0813 21:16:32.852624  438411 system_pods.go:61] "kube-apiserver-custom-weave-20210813205926-393438" [0e6fdf79-7ece-4bc3-a598-1c8be1c9013d] Running
	I0813 21:16:32.852631  438411 system_pods.go:61] "kube-controller-manager-custom-weave-20210813205926-393438" [b67bf803-0a8a-40f9-a39e-2736f7747e40] Running
	I0813 21:16:32.852637  438411 system_pods.go:61] "kube-proxy-29b2s" [3d2a251b-09e2-48ff-b222-1d3db3aad089] Running
	I0813 21:16:32.852644  438411 system_pods.go:61] "kube-scheduler-custom-weave-20210813205926-393438" [cf2ed388-1952-43f1-91dd-545cbaa1dfe8] Running
	I0813 21:16:32.852652  438411 system_pods.go:61] "storage-provisioner" [1eacaf1d-9d54-497e-90ae-1008ba482b26] Running
	I0813 21:16:32.852658  438411 system_pods.go:61] "weave-net-24hqj" [2c64949f-41ec-4759-8c82-d8215d2f5eb6] Running
	I0813 21:16:32.852665  438411 system_pods.go:74] duration metric: took 7.989329ms to wait for pod list to return data ...
	I0813 21:16:32.852676  438411 default_sa.go:34] waiting for default service account to be created ...
	I0813 21:16:32.855836  438411 default_sa.go:45] found service account: "default"
	I0813 21:16:32.855863  438411 default_sa.go:55] duration metric: took 3.179222ms for default service account to be created ...
	I0813 21:16:32.855873  438411 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 21:16:32.862138  438411 system_pods.go:86] 8 kube-system pods found
	I0813 21:16:32.862187  438411 system_pods.go:89] "coredns-558bd4d5db-4brwp" [3cfca7ec-0126-4e48-8bab-bc373d1afabb] Running
	I0813 21:16:32.862197  438411 system_pods.go:89] "etcd-custom-weave-20210813205926-393438" [9a0a3586-6194-4a82-855a-9240fb9eb63e] Running
	I0813 21:16:32.862205  438411 system_pods.go:89] "kube-apiserver-custom-weave-20210813205926-393438" [0e6fdf79-7ece-4bc3-a598-1c8be1c9013d] Running
	I0813 21:16:32.862212  438411 system_pods.go:89] "kube-controller-manager-custom-weave-20210813205926-393438" [b67bf803-0a8a-40f9-a39e-2736f7747e40] Running
	I0813 21:16:32.862219  438411 system_pods.go:89] "kube-proxy-29b2s" [3d2a251b-09e2-48ff-b222-1d3db3aad089] Running
	I0813 21:16:32.862225  438411 system_pods.go:89] "kube-scheduler-custom-weave-20210813205926-393438" [cf2ed388-1952-43f1-91dd-545cbaa1dfe8] Running
	I0813 21:16:32.862231  438411 system_pods.go:89] "storage-provisioner" [1eacaf1d-9d54-497e-90ae-1008ba482b26] Running
	I0813 21:16:32.862237  438411 system_pods.go:89] "weave-net-24hqj" [2c64949f-41ec-4759-8c82-d8215d2f5eb6] Running
	I0813 21:16:32.862244  438411 system_pods.go:126] duration metric: took 6.359728ms to wait for k8s-apps to be running ...
	I0813 21:16:32.862252  438411 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 21:16:32.862301  438411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 21:16:32.884490  438411 system_svc.go:56] duration metric: took 22.228687ms WaitForService to wait for kubelet.
	I0813 21:16:32.884520  438411 kubeadm.go:547] duration metric: took 10.04126735s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 21:16:32.884551  438411 node_conditions.go:102] verifying NodePressure condition ...
	I0813 21:16:32.890435  438411 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 21:16:32.890493  438411 node_conditions.go:123] node cpu capacity is 2
	I0813 21:16:32.890541  438411 node_conditions.go:105] duration metric: took 5.981979ms to run NodePressure ...
	I0813 21:16:32.890563  438411 start.go:231] waiting for startup goroutines ...
	I0813 21:16:32.951379  438411 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 21:16:32.953337  438411 out.go:177] * Done! kubectl is now configured to use "custom-weave-20210813205926-393438" cluster and "default" namespace by default
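
	The api_server.go lines above treat the control plane as healthy once GET /healthz on the apiserver returns HTTP 200 with body "ok". A minimal sketch of that probe; the real check authenticates against the cluster's CA bundle, whereas this illustration skips TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// apiserverHealthy reports whether url answers 200 with body "ok",
	// the same success condition the log records for /healthz.
	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Transport: &http.Transport{
				// Illustration only; use the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}

	func main() {
		fmt.Println(apiserverHealthy("https://192.168.39.226:8443/healthz"))
	}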
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7c3f653711073       cf9cba6c3e4a8       58 seconds ago       Running             kube-controller-manager   4                   88af86424b3ca
	c3024159112ae       b2462aa94d403       About a minute ago   Running             kube-apiserver            1                   7cf9902b15a51
	c6b1250932083       0048118155842       About a minute ago   Running             etcd                      1                   39b7a0897fe8b
	5084fc733d67e       7da2efaa5b480       About a minute ago   Running             kube-scheduler            1                   0ed4f38ed8ccc
	116848b32df56       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   3                   88af86424b3ca
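	(For a comparable live view, the same container status can usually be pulled from the node the way the audit rows above do for images, e.g. out/minikube-linux-amd64 ssh -p newest-cni-20210813211202-393438 "sudo crictl ps -a".)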
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 21:14:05 UTC, end at Fri 2021-08-13 21:16:33 UTC. --
	Aug 13 21:15:13 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:13.974940680Z" level=error msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-078687633 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.269996621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-newest-cni-20210813211202-393438,Uid:23f20a6ab1f9bfbd5f9cb779fc57f4ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.279548640Z" level=info msg="CreateContainer within sandbox \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.328886490Z" level=info msg="CreateContainer within sandbox \"0ed4f38ed8cccb9513efab751242b39670c7c97deeaa489058da5828c073ac65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.329779026Z" level=info msg="StartContainer for \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\""
	Aug 13 21:15:14 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:14.760090018Z" level=info msg="StartContainer for \"5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1\" returns successfully"
	Aug 13 21:15:15 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:15.809292676Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.433043087Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.437361525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.441026707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.444041365Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.444574191Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\" returns image reference \"sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.449718035Z" level=info msg="CreateContainer within sandbox \"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.723429988Z" level=info msg="CreateContainer within sandbox \"39b7a0897fe8b6ef1922184c5c5ef6cb53647a88a3a74460dc13a222597f9e8e\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\""
	Aug 13 21:15:23 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:23.724398735Z" level=info msg="StartContainer for \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\""
	Aug 13 21:15:24 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:24.140636481Z" level=info msg="StartContainer for \"c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6\" returns successfully"
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.813753637Z" level=info msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.871547596Z" level=info msg="CreateContainer within sandbox \"7cf9902b15a5197de7de68e9fa14ac4d45d944327fc65ccca7cf6b424a87ee45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\""
	Aug 13 21:15:28 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:28.873257444Z" level=info msg="StartContainer for \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\""
	Aug 13 21:15:29 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:29.418344557Z" level=info msg="StartContainer for \"c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847\" returns successfully"
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.815030463Z" level=info msg="CreateContainer within sandbox \"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.868296551Z" level=info msg="CreateContainer within sandbox \"88af86424b3ca42affa7521b91bb95478ed26a6921d5684639c1bf8e13d665d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\""
	Aug 13 21:15:34 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:34.870434423Z" level=info msg="StartContainer for \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\""
	Aug 13 21:15:35 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:35.700829030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 21:15:35 newest-cni-20210813211202-393438 containerd[2161]: time="2021-08-13T21:15:35.772433833Z" level=info msg="StartContainer for \"7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug13 21:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.099111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug13 21:14] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.657200] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038046] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.036875] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1734 comm=systemd-network
	[  +0.934988] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.243361] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011161] vboxguest: PCI device not found, probably running on physical hardware.
	[ +23.456214] systemd-fstab-generator[2074]: Ignoring "noauto" for root device
	[  +1.300348] systemd-fstab-generator[2107]: Ignoring "noauto" for root device
	[  +0.129987] systemd-fstab-generator[2120]: Ignoring "noauto" for root device
	[  +0.220069] systemd-fstab-generator[2150]: Ignoring "noauto" for root device
	[  +6.856447] systemd-fstab-generator[2346]: Ignoring "noauto" for root device
	[Aug13 21:15] systemd-fstab-generator[3104]: Ignoring "noauto" for root device
	[  +0.763030] systemd-fstab-generator[3159]: Ignoring "noauto" for root device
	[  +0.973563] systemd-fstab-generator[3212]: Ignoring "noauto" for root device
	[Aug13 21:16] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [c6b12509320836c11d4cc172f0db08195d2be935320896b44858fa34ee1a23b6] <==
	* {"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"5.040887474s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.480Z","caller":"traceutil/trace.go:171","msg":"trace[1946565333] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:543; }","duration":"5.046178187s","start":"2021-08-13T21:15:40.433Z","end":"2021-08-13T21:15:45.479Z","steps":["trace[1946565333] 'range keys from in-memory index tree'  (duration: 5.04029265s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.482Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"4.241684637s","expected-duration":"100ms","prefix":"","request":"header:<ID:3642140244349460335 > lease_revoke:<id:328b7b415c780801>","response":"size:28"}
	{"level":"info","ts":"2021-08-13T21:15:45.483Z","caller":"traceutil/trace.go:171","msg":"trace[1988578343] linearizableReadLoop","detail":"{readStateIndex:567; appliedIndex:565; }","duration":"4.238399066s","start":"2021-08-13T21:15:41.244Z","end":"2021-08-13T21:15:45.483Z","steps":["trace[1988578343] 'read index received'  (duration: 3.268448854s)","trace[1988578343] 'applied index is now lower than readState.Index'  (duration: 969.93816ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-13T21:15:45.476Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"8.165339924s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:expand-controller\" ","response":"range_response_count:1 size:745"}
	{"level":"info","ts":"2021-08-13T21:15:45.484Z","caller":"traceutil/trace.go:171","msg":"trace[185560543] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:expand-controller; range_end:; response_count:1; response_revision:543; }","duration":"8.173511245s","start":"2021-08-13T21:15:37.310Z","end":"2021-08-13T21:15:45.484Z","steps":["trace[185560543] 'range keys from in-memory index tree'  (duration: 8.163686974s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:37.310Z","time spent":"8.174930389s","remote":"127.0.0.1:49096","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":768,"request content":"key:\"/registry/clusterrolebindings/system:controller:expand-controller\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.046803662s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.486Z","caller":"traceutil/trace.go:171","msg":"trace[1776376833] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"6.059156266s","start":"2021-08-13T21:15:39.427Z","end":"2021-08-13T21:15:45.486Z","steps":["trace[1776376833] 'range keys from in-memory index tree'  (duration: 6.045692216s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:39.427Z","time spent":"6.059219706s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.475Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"8.130205905s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.489Z","caller":"traceutil/trace.go:171","msg":"trace[729064226] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"8.144792565s","start":"2021-08-13T21:15:37.344Z","end":"2021-08-13T21:15:45.489Z","steps":["trace[729064226] 'range keys from in-memory index tree'  (duration: 8.129174734s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.490Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:37.344Z","time spent":"8.145250502s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.474Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.234318221s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.491Z","caller":"traceutil/trace.go:171","msg":"trace[245584525] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"7.252074411s","start":"2021-08-13T21:15:38.239Z","end":"2021-08-13T21:15:45.491Z","steps":["trace[245584525] 'range keys from in-memory index tree'  (duration: 7.234018007s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:38.239Z","time spent":"7.252420536s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.861781651s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.493Z","caller":"traceutil/trace.go:171","msg":"trace[562982811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"1.867724197s","start":"2021-08-13T21:15:43.625Z","end":"2021-08-13T21:15:45.493Z","steps":["trace[562982811] 'agreement among raft nodes before linearized reading'  (duration: 1.857923084s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:43.625Z","time spent":"1.867794956s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"610.879228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.494Z","caller":"traceutil/trace.go:171","msg":"trace[44290579] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"617.154665ms","start":"2021-08-13T21:15:44.876Z","end":"2021-08-13T21:15:45.494Z","steps":["trace[44290579] 'agreement among raft nodes before linearized reading'  (duration: 610.838286ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.494Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:44.876Z","time spent":"617.216149ms","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-13T21:15:45.487Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.242852345s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-13T21:15:45.504Z","caller":"traceutil/trace.go:171","msg":"trace[1547725281] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"1.25977393s","start":"2021-08-13T21:15:44.245Z","end":"2021-08-13T21:15:45.504Z","steps":["trace[1547725281] 'agreement among raft nodes before linearized reading'  (duration: 1.242814194s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-13T21:15:45.504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-13T21:15:44.245Z","time spent":"1.259841649s","remote":"127.0.0.1:49072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	
	* 
	* ==> kernel <==
	*  21:17:14 up 3 min,  0 users,  load average: 0.60, 0.53, 0.22
	Linux newest-cni-20210813211202-393438 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [c3024159112aecec2beac2d267279ad834137f8c469ff2a6271092f73bb9a847] <==
	* I0813 21:15:35.674966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 21:15:35.679152       1 cache.go:39] Caches are synced for autoregister controller
	I0813 21:15:35.681323       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 21:15:36.216740       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 21:15:37.336621       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 21:15:37.336904       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0813 21:15:38.336697       1 handler_proxy.go:104] no RequestInfo found in the context
	E0813 21:15:38.337395       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 21:15:38.338267       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 21:15:45.495743       1 trace.go:205] Trace[2112370956]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:2beb930e-f810-4a6a-806e-0349d8fad1de,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 21:15:37.292) (total time: 8202ms):
	Trace[2112370956]: ---"About to write a response" 8201ms (21:15:45.494)
	Trace[2112370956]: [8.202675954s] [8.202675954s] END
	I0813 21:15:45.500044       1 trace.go:205] Trace[697574104]: "GuaranteedUpdate etcd3" type:*core.Event (13-Aug-2021 21:15:37.441) (total time: 8058ms):
	Trace[697574104]: ---"initial value restored" 8039ms (21:15:45.480)
	Trace[697574104]: [8.058757473s] [8.058757473s] END
	I0813 21:15:45.501899       1 trace.go:205] Trace[126321225]: "Patch" url:/api/v1/namespaces/default/events/newest-cni-20210813211202-393438.169afa2c97dd0b10,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:df70debe-3909-4bca-b25f-6b3f66d948f1,client:192.168.61.119,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 21:15:37.440) (total time: 8060ms):
	Trace[126321225]: ---"About to apply patch" 8039ms (21:15:45.480)
	Trace[126321225]: [8.060597478s] [8.060597478s] END
	I0813 21:15:46.161387       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 21:15:46.194245       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 21:15:46.317408       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 21:15:46.349236       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 21:15:46.369688       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 21:15:47.942756       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [116848b32df564f7527d30d663d98dc9754820ec664386dba0d7b2d25e56a787] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:151 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc000829c00, 0x203000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(...)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000286ed0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000286ed0, 0x5175ae0, 0xc000c9e570, 0x4c62101, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000286ed0, 0x3b9aca00, 0x0, 0x1, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000286ed0, 0x3b9aca00, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 144 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000286ef0, 0x5175ae0, 0xc000c24690, 0x440a54e7e3c50701, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000286ef0, 0xdf8475800, 0x0, 0x9f0c8eb260a49901, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000286ef0, 0xdf8475800, 0xc00009a360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	* 
	* ==> kube-controller-manager [7c3f6537110738254de0cc7493d1348ee2db04ed7e66ab953a41fb4995c31a8f] <==
	* I0813 21:15:48.024004       1 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
	I0813 21:15:48.024381       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 21:15:48.035084       1 controllermanager.go:577] Started "endpointslicemirroring"
	I0813 21:15:48.039794       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0813 21:15:48.040041       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0813 21:15:48.055058       1 controllermanager.go:577] Started "disruption"
	I0813 21:15:48.058328       1 disruption.go:363] Starting disruption controller
	I0813 21:15:48.058896       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0813 21:15:48.066149       1 controllermanager.go:577] Started "statefulset"
	I0813 21:15:48.066754       1 stateful_set.go:148] Starting stateful set controller
	I0813 21:15:48.067749       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0813 21:15:48.091305       1 controllermanager.go:577] Started "persistentvolume-binder"
	I0813 21:15:48.094568       1 pv_controller_base.go:308] Starting persistent volume controller
	I0813 21:15:48.095364       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0813 21:15:48.126077       1 attach_detach_controller.go:328] Starting attach detach controller
	I0813 21:15:48.126447       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0813 21:15:48.127715       1 controllermanager.go:577] Started "attachdetach"
	I0813 21:15:48.142252       1 controllermanager.go:577] Started "root-ca-cert-publisher"
	I0813 21:15:48.143766       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0813 21:15:48.144043       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0813 21:15:48.153989       1 controllermanager.go:577] Started "csrcleaner"
	I0813 21:15:48.154785       1 cleaner.go:82] Starting CSR cleaner controller
	I0813 21:15:48.175698       1 controllermanager.go:577] Started "bootstrapsigner"
	I0813 21:15:48.176801       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0813 21:15:48.201227       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [5084fc733d67eff63506985d620e95a67b6c82bf7cd82576615fa1cbff2b1da1] <==
	* E0813 21:15:25.034371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.119:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:25.592124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.119:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:25.605660       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.119:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:29.289283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.119:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.119:8443: connect: connection refused
	E0813 21:15:35.317677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 21:15:35.321290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321666       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.321940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 21:15:35.322055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 21:15:35.322148       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 21:15:35.322266       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 21:15:35.322363       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 21:15:35.324710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 21:15:35.379529       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 21:15:35.442188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 21:15:36.005200       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.007622       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.007978       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008279       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008607       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008706       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008892       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.008956       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0813 21:15:36.009242       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 21:14:05 UTC, end at Fri 2021-08-13 21:17:14 UTC. --
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.554092    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.654864    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.755270    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.857346    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:47 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:47.957855    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.059374    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.163155    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.264042    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.364602    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.465299    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.566831    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.668662    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.769756    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.870447    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:48 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:48.971712    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.072617    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.173301    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.274100    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.375072    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.476060    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: E0813 21:15:49.576636    2354 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210813211202-393438\" not found"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 kubelet[2354]: I0813 21:15:49.663562    2354 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: kubelet.service: Succeeded.
	Aug 13 21:15:49 newest-cni-20210813211202-393438 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 21:17:13.939560  439497 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (85.05s)
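The failure above ends with minikube unable to run "kubectl describe nodes" because the TLS handshake to the apiserver timed out, which is consistent with the multi-second etcd apply latencies ("took" of 5-8s against an expected 100ms) recorded in the same log window. As a triage aid, below is a minimal sketch of a fail-fast reachability probe one could run before attempting log collection; it is a hypothetical helper, not part of the test harness. The address is the apiserver endpoint from the scheduler logs above, and the 5-second bound is an arbitrary choice.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// Fail-fast apiserver reachability probe: any HTTP status in the response
	// proves the TLS handshake completed, which is exactly what the log
	// collection above could not get.
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // bounds the TLS handshake as well as the response
			Transport: &http.Transport{
				// Reachability probe only: the apiserver serves a cluster-internal
				// CA, so certificate verification is skipped here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.61.119:8443/healthz")
		if err != nil {
			fmt.Fprintf(os.Stderr, "apiserver not reachable: %v\n", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz:", resp.Status)
	}

A handshake that cannot complete within the bound points at apiserver or etcd pressure on the node rather than at kubectl itself.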

                                                
                                    

Test pass (232/269)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 10.72
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 8.99
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 13.16
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 134.06
29 TestAddons/parallel/Registry 20.78
30 TestAddons/parallel/Ingress 41.84
31 TestAddons/parallel/MetricsServer 5.8
32 TestAddons/parallel/HelmTiller 12.26
33 TestAddons/parallel/Olm 65.06
35 TestAddons/parallel/GCPAuth 46.2
36 TestCertOptions 75.4
38 TestForceSystemdFlag 105.93
39 TestForceSystemdEnv 76.1
40 TestKVMDriverInstallOrUpdate 2.82
44 TestErrorSpam/setup 60.34
45 TestErrorSpam/start 0.42
46 TestErrorSpam/status 0.74
47 TestErrorSpam/pause 5.14
48 TestErrorSpam/unpause 1.71
49 TestErrorSpam/stop 5.26
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 124.27
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 26.21
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.2
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.48
61 TestFunctional/serial/CacheCmd/cache/add_local 5.13
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.45
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 35.74
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.33
72 TestFunctional/serial/LogsFileCmd 1.33
74 TestFunctional/parallel/ConfigCmd 0.34
75 TestFunctional/parallel/DashboardCmd 4.19
76 TestFunctional/parallel/DryRun 0.34
77 TestFunctional/parallel/InternationalLanguage 0.18
78 TestFunctional/parallel/StatusCmd 0.77
81 TestFunctional/parallel/ServiceCmd 13.42
82 TestFunctional/parallel/AddonsCmd 0.16
83 TestFunctional/parallel/PersistentVolumeClaim 38.39
85 TestFunctional/parallel/SSHCmd 0.45
86 TestFunctional/parallel/CpCmd 0.45
87 TestFunctional/parallel/MySQL 28.53
88 TestFunctional/parallel/FileSync 0.23
89 TestFunctional/parallel/CertSync 1.91
93 TestFunctional/parallel/NodeLabels 0.1
94 TestFunctional/parallel/LoadImage 3
95 TestFunctional/parallel/RemoveImage 3.44
96 TestFunctional/parallel/LoadImageFromFile 2.18
97 TestFunctional/parallel/BuildImage 5.36
98 TestFunctional/parallel/ListImages 0.22
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
101 TestFunctional/parallel/Version/short 0.07
102 TestFunctional/parallel/Version/components 1.36
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
110 TestFunctional/parallel/ProfileCmd/profile_list 0.7
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
112 TestFunctional/parallel/MountCmd/any-port 7.98
113 TestFunctional/parallel/MountCmd/specific-port 1.8
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
148 TestMainNoArgs 0.05
151 TestMultiNode/serial/FreshStart2Nodes 147.78
152 TestMultiNode/serial/DeployApp2Nodes 9.76
153 TestMultiNode/serial/PingHostFrom2Pods 1.03
154 TestMultiNode/serial/AddNode 53.09
155 TestMultiNode/serial/ProfileList 0.27
156 TestMultiNode/serial/CopyFile 1.8
157 TestMultiNode/serial/StopNode 2.91
158 TestMultiNode/serial/StartAfterStop 70.54
159 TestMultiNode/serial/RestartKeepsNodes 552.81
160 TestMultiNode/serial/DeleteNode 2.18
161 TestMultiNode/serial/StopMultiNode 184.36
162 TestMultiNode/serial/RestartMultiNode 237.15
163 TestMultiNode/serial/ValidateNameConflict 61.48
169 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
170 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.1
172 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
173 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.82
175 TestDebPackageInstall/install_amd64_debian:10/minikube 0
176 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.56
178 TestDebPackageInstall/install_amd64_debian:9/minikube 0
179 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.26
181 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
182 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 14.05
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
185 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 13.4
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
188 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 13.89
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 12.7
192 TestPreload 182.93
194 TestScheduledStopUnix 99.89
198 TestRunningBinaryUpgrade 244.83
200 TestKubernetesUpgrade 226.55
203 TestPause/serial/Start 153.2
211 TestPause/serial/SecondStartNoReconfiguration 34.66
213 TestStoppedBinaryUpgrade/MinikubeLogs 1.51
215 TestPause/serial/Unpause 1.51
224 TestNetworkPlugins/group/false 0.6
228 TestPause/serial/DeletePaused 0.8
229 TestPause/serial/VerifyDeletedResources 0.44
231 TestStartStop/group/old-k8s-version/serial/FirstStart 171.68
233 TestStartStop/group/no-preload/serial/FirstStart 151.78
235 TestStartStop/group/embed-certs/serial/FirstStart 125.32
237 TestStartStop/group/default-k8s-different-port/serial/FirstStart 107.13
238 TestStartStop/group/old-k8s-version/serial/DeployApp 8.75
239 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
240 TestStartStop/group/old-k8s-version/serial/Stop 92.49
241 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.73
242 TestStartStop/group/no-preload/serial/DeployApp 10.57
243 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.17
244 TestStartStop/group/default-k8s-different-port/serial/Stop 92.48
245 TestStartStop/group/embed-certs/serial/DeployApp 8.65
246 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
247 TestStartStop/group/no-preload/serial/Stop 93.48
248 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
249 TestStartStop/group/embed-certs/serial/Stop 92.48
250 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
251 TestStartStop/group/old-k8s-version/serial/SecondStart 477.02
252 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.17
253 TestStartStop/group/default-k8s-different-port/serial/SecondStart 428.73
254 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
255 TestStartStop/group/no-preload/serial/SecondStart 400.54
256 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
257 TestStartStop/group/embed-certs/serial/SecondStart 506.63
258 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
259 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
260 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
262 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.06
264 TestStartStop/group/newest-cni/serial/FirstStart 105.87
265 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.12
266 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.27
268 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
269 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 10.97
270 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
272 TestNetworkPlugins/group/auto/Start 126.36
273 TestNetworkPlugins/group/cilium/Start 201.61
274 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.45
275 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.97
276 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
278 TestStartStop/group/newest-cni/serial/DeployApp 0
279 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
280 TestStartStop/group/newest-cni/serial/Stop 4.14
281 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
282 TestStartStop/group/newest-cni/serial/SecondStart 114.88
283 TestNetworkPlugins/group/calico/Start 118.17
284 TestNetworkPlugins/group/auto/KubeletFlags 0.31
285 TestNetworkPlugins/group/auto/NetCatPod 13.87
286 TestNetworkPlugins/group/auto/DNS 0.35
287 TestNetworkPlugins/group/auto/Localhost 0.25
288 TestNetworkPlugins/group/auto/HairPin 0.22
289 TestNetworkPlugins/group/custom-weave/Start 88.53
290 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
291 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
294 TestNetworkPlugins/group/calico/ControllerPod 5.03
295 TestNetworkPlugins/group/cilium/ControllerPod 5.03
296 TestNetworkPlugins/group/calico/KubeletFlags 0.22
297 TestNetworkPlugins/group/calico/NetCatPod 10.64
298 TestNetworkPlugins/group/cilium/KubeletFlags 0.25
299 TestNetworkPlugins/group/cilium/NetCatPod 12.58
300 TestNetworkPlugins/group/calico/DNS 0.37
301 TestNetworkPlugins/group/calico/Localhost 0.21
302 TestNetworkPlugins/group/calico/HairPin 0.29
303 TestNetworkPlugins/group/kindnet/Start 116.17
304 TestNetworkPlugins/group/cilium/DNS 0.35
305 TestNetworkPlugins/group/cilium/Localhost 0.31
306 TestNetworkPlugins/group/cilium/HairPin 0.26
307 TestNetworkPlugins/group/flannel/Start 117.14
308 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.23
309 TestNetworkPlugins/group/custom-weave/NetCatPod 10.62
310 TestNetworkPlugins/group/enable-default-cni/Start 147.32
311 TestNetworkPlugins/group/bridge/Start 107.12
312 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
313 TestNetworkPlugins/group/flannel/ControllerPod 5.03
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
315 TestNetworkPlugins/group/kindnet/NetCatPod 10.59
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
317 TestNetworkPlugins/group/flannel/NetCatPod 11.7
318 TestNetworkPlugins/group/kindnet/DNS 0.25
319 TestNetworkPlugins/group/kindnet/Localhost 0.21
320 TestNetworkPlugins/group/kindnet/HairPin 0.22
321 TestNetworkPlugins/group/flannel/DNS 0.22
322 TestNetworkPlugins/group/flannel/Localhost 0.2
323 TestNetworkPlugins/group/flannel/HairPin 0.22
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
325 TestNetworkPlugins/group/bridge/NetCatPod 11.49
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.46
328 TestNetworkPlugins/group/bridge/DNS 0.22
329 TestNetworkPlugins/group/bridge/Localhost 0.22
330 TestNetworkPlugins/group/bridge/HairPin 0.19
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
TestDownloadOnly/v1.14.0/json-events (10.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.71754376s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (10.72s)
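The json-events test drives `minikube start -o=json`, which emits machine-readable progress events, one JSON object per stdout line in minikube's CloudEvents-style format. Below is a minimal sketch of consuming such a stream; the binary path and flags mirror the invocation above, the profile name is a placeholder, and the "type" and "data" field names are assumptions about the event payload rather than a verified schema.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Read a line-delimited JSON event stream from a child process and decode
	// each line into a generic map, tolerating any non-JSON noise.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-demo", "--driver=kvm2")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON line on stdout
			}
			fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}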

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200751-393438
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200751-393438: exit status 85 (62.819138ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:07:51
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:07:51.077467  393450 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:07:51.077798  393450 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:51.077812  393450 out.go:311] Setting ErrFile to fd 2...
	I0813 20:07:51.077817  393450 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:07:51.078089  393450 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:07:51.078300  393450 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:07:51.078742  393450 out.go:305] Setting JSON to true
	I0813 20:07:51.113337  393450 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":3033,"bootTime":1628882238,"procs":139,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:07:51.113420  393450 start.go:121] virtualization: kvm guest
	I0813 20:07:51.116226  393450 notify.go:169] Checking for updates...
	I0813 20:07:51.118061  393450 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:07:51.146335  393450 start.go:278] selected driver: kvm2
	I0813 20:07:51.146352  393450 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:07:51.146992  393450 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:51.147115  393450 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:07:51.158519  393450 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:07:51.158562  393450 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 20:07:51.158997  393450 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0813 20:07:51.159091  393450 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 20:07:51.159126  393450 cni.go:93] Creating CNI manager for ""
	I0813 20:07:51.159133  393450 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:07:51.159142  393450 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 20:07:51.159147  393450 start_flags.go:277] config:
	{Name:download-only-20210813200751-393438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200751-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:07:51.159335  393450 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:07:51.161125  393450 download.go:92] Downloading: https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0813 20:07:53.276510  393450 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:07:53.319545  393450 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:07:53.319581  393450 cache.go:56] Caching tarball of preloaded images
	I0813 20:07:53.319737  393450 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 20:07:53.321471  393450 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:07:53.382178  393450 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:8891d3d5a9795ff90493434142d1724b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:07:59.395985  393450 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:07:59.396070  393450 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200751-393438"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
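The download log above fetches the preload tarball with a `?checksum=md5:...` query and then verifies the checksum of the file on disk. Below is a minimal sketch of that verification step, not minikube's actual implementation: the expected digest is the one embedded in the download URL above, and the file path is a placeholder for wherever the tarball landed in the cache.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// Verify a downloaded file against an expected md5 digest by streaming it
	// through the hash, mirroring the preload checksum step in the log above.
	func main() {
		const expected = "8891d3d5a9795ff90493434142d1724b" // digest from the download URL
		f, err := os.Open("preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
		}
		fmt.Println("preload checksum OK")
	}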

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (8.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.990645904s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (8.99s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200751-393438
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200751-393438: exit status 85 (64.632455ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:01
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:01.856181  393486 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:01.856248  393486 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:01.856275  393486 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:01.856279  393486 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:01.856402  393486 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:01.856529  393486 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:01.856654  393486 out.go:305] Setting JSON to true
	I0813 20:08:01.891611  393486 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":3044,"bootTime":1628882238,"procs":139,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:01.891722  393486 start.go:121] virtualization: kvm guest
	I0813 20:08:01.895082  393486 notify.go:169] Checking for updates...
	I0813 20:08:01.897351  393486 config.go:177] Loaded profile config "download-only-20210813200751-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0813 20:08:01.897398  393486 start.go:659] api.Load failed for download-only-20210813200751-393438: filestore "download-only-20210813200751-393438": Docker machine "download-only-20210813200751-393438" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:01.897437  393486 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:01.897471  393486 start.go:659] api.Load failed for download-only-20210813200751-393438: filestore "download-only-20210813200751-393438": Docker machine "download-only-20210813200751-393438" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:01.925928  393486 start.go:278] selected driver: kvm2
	I0813 20:08:01.925946  393486 start.go:751] validating driver "kvm2" against &{Name:download-only-20210813200751-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813200751-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:01.926758  393486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:01.926939  393486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:08:01.937543  393486 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:08:01.938189  393486 cni.go:93] Creating CNI manager for ""
	I0813 20:08:01.938205  393486 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:08:01.938213  393486 start_flags.go:277] config:
	{Name:download-only-20210813200751-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200751-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:01.938313  393486 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:01.939904  393486 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:01.989923  393486 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:01.989946  393486 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:01.990070  393486 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 20:08:01.991909  393486 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:02.050749  393486 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:6ee74ddc722ac9485c71891d6e62193d -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200751-393438"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

TestDownloadOnly/v1.22.0-rc.0/json-events (13.16s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210813200751-393438 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.158519579s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (13.16s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210813200751-393438
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210813200751-393438: exit status 85 (61.417622ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 20:08:10
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 20:08:10.914378  393524 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:08:10.914458  393524 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:10.914466  393524 out.go:311] Setting ErrFile to fd 2...
	I0813 20:08:10.914469  393524 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:08:10.914578  393524 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	W0813 20:08:10.914687  393524 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/config/config.json: no such file or directory
	I0813 20:08:10.914793  393524 out.go:305] Setting JSON to true
	I0813 20:08:10.949202  393524 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":3053,"bootTime":1628882238,"procs":136,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:08:10.949268  393524 start.go:121] virtualization: kvm guest
	I0813 20:08:11.104294  393524 notify.go:169] Checking for updates...
	I0813 20:08:11.108542  393524 config.go:177] Loaded profile config "download-only-20210813200751-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	W0813 20:08:11.108596  393524 start.go:659] api.Load failed for download-only-20210813200751-393438: filestore "download-only-20210813200751-393438": Docker machine "download-only-20210813200751-393438" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:11.108667  393524 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 20:08:11.108700  393524 start.go:659] api.Load failed for download-only-20210813200751-393438: filestore "download-only-20210813200751-393438": Docker machine "download-only-20210813200751-393438" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 20:08:11.138748  393524 start.go:278] selected driver: kvm2
	I0813 20:08:11.138768  393524 start.go:751] validating driver "kvm2" against &{Name:download-only-20210813200751-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813200751-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:11.139689  393524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:11.139857  393524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 20:08:11.150566  393524 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 20:08:11.151295  393524 cni.go:93] Creating CNI manager for ""
	I0813 20:08:11.151309  393524 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0813 20:08:11.151317  393524 start_flags.go:277] config:
	{Name:download-only-20210813200751-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210813200751-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:08:11.151417  393524 iso.go:123] acquiring lock: {Name:mkbb42d4fa68811cd256644294b190331263ca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 20:08:11.153179  393524 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:08:11.199722  393524 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:11.199752  393524 cache.go:56] Caching tarball of preloaded images
	I0813 20:08:11.199908  393524 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 20:08:11.201592  393524 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:11.269203  393524 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:569167d620e883cc7aa194927ed83d26 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0813 20:08:20.150090  393524 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0813 20:08:20.150183  393524 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813200751-393438"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)
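
The preload handling in the log above follows a fetch-then-verify pattern: preload.go resolves the expected checksum, download.go pulls the tarball from a URL carrying a `?checksum=md5:...` hint, and the saved file's checksum is verified before the cache entry is trusted. A minimal Go sketch of the verification step (the verifyMD5 helper is hypothetical, not minikube's actual code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded tarball and compares it to the checksum
// advertised in the download URL. Hypothetical helper, not minikube's code.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// md5 taken from the v1.22.0-rc.0 preload URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4",
		"569167d620e883cc7aa194927ed83d26")
	fmt.Println(err)
}

Verifying after download means a truncated or corrupted tarball is caught here rather than poisoning the preload cache for later test runs.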

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210813200751-393438
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestOffline (134.06s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210813205520-393438 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210813205520-393438 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m12.945920893s)
helpers_test.go:176: Cleaning up "offline-containerd-20210813205520-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210813205520-393438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210813205520-393438: (1.114705779s)
--- PASS: TestOffline (134.06s)

TestAddons/parallel/Registry (20.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 18.941292ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-5svq5" [e46e7721-b1fa-4ead-b2e2-c7e63d60991a] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.025562065s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-tqczv" [753222c4-f8c8-4036-aaaf-76542b374291] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009845082s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210813200824-393438 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210813200824-393438 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210813200824-393438 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.884347962s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.78s)

TestAddons/parallel/Ingress (41.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-wrrnf" [c8371253-b71e-4ecb-a555-aa4eaa493c9d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 9.127056ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200824-393438 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210813200824-393438 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [dea5d207-6be9-447d-8a10-85cd1317b242] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [dea5d207-6be9-447d-8a10-85cd1317b242] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.020977557s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813200824-393438 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable ingress --alsologtostderr -v=1: (29.794597811s)
--- PASS: TestAddons/parallel/Ingress (41.84s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 19.707285ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-9567z" [f3e2b887-96a1-4798-862b-2cce165f6e71] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019795131s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210813200824-393438 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/HelmTiller (12.26s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 6.589617ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-dhd7f" [b66f31a9-eec4-4952-a7d1-5e6338a26140] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.042531822s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210813200824-393438 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210813200824-393438 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (6.380661208s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.26s)

TestAddons/parallel/Olm (65.06s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 20.351954ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 24.604599ms
addons_test.go:471: packageserver stabilized in 28.625523ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-x887b" [54471b6a-4dbf-4cb7-b941-810f09692f7f] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.015762071s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-7j2mx" [67469ca9-f438-4fe4-961f-0d6dea50de63] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.011537991s

=== CONT  TestAddons/parallel/Olm
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-tl766" [a7a50254-1436-4d5e-9bca-854216c55228] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-tl766" [a7a50254-1436-4d5e-9bca-854216c55228] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-tl766" [a7a50254-1436-4d5e-9bca-854216c55228] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-tl766" [a7a50254-1436-4d5e-9bca-854216c55228] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-tl766" [a7a50254-1436-4d5e-9bca-854216c55228] Running
helpers_test.go:343: "packageserver-6f5dccfdd7-4g764" [b3e8eac4-b915-4659-b127-902031bddc3a] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.009840149s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-n7288" [4f910284-1cf2-4c65-9ea8-541ef54d7304] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.00886283s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210813200824-393438 create -f testdata/etcd.yaml
2021/08/13 20:11:49 [DEBUG] GET http://192.168.39.71:5000

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200824-393438 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200824-393438 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200824-393438 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210813200824-393438 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210813200824-393438 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (65.06s)
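
The repeated `kubectl get csv -n my-etcd` runs above, each answered with "No resources found", are a poll-until-ready loop: the test keeps re-querying until OLM materializes a ClusterServiceVersion for the etcd operator it just created. A minimal sketch of that retry shape in Go (helper name and polling interval are illustrative; the test's real loop lives in addons_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForCSV re-runs `kubectl get csv` until something shows up or the
// deadline passes -- the same shape as the retries logged above.
func waitForCSV(kubecontext, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubecontext,
			"get", "csv", "-n", ns).CombinedOutput()
		if len(out) > 0 && !strings.Contains(string(out), "No resources found") {
			return nil // a ClusterServiceVersion appeared
		}
		time.Sleep(10 * time.Second) // assumed polling interval
	}
	return fmt.Errorf("no csv in namespace %s after %v", ns, timeout)
}

func main() {
	fmt.Println(waitForCSV("addons-20210813200824-393438", "my-etcd", 6*time.Minute))
}

Polling with a bounded deadline is what lets the test pass here once OLM catches up, instead of failing on the first empty result.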

TestAddons/parallel/GCPAuth (46.2s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210813200824-393438 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [8598ffc4-7138-4072-9e16-6d4482f4c672] Pending
helpers_test.go:343: "busybox" [8598ffc4-7138-4072-9e16-6d4482f4c672] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [8598ffc4-7138-4072-9e16-6d4482f4c672] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.012211355s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210813200824-393438 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210813200824-393438 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210813200824-393438 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-qcb6d" [48f9b193-0daa-4acc-bdf3-6294a0fd5e6e] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-qcb6d" [48f9b193-0daa-4acc-bdf3-6294a0fd5e6e] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 15.018002035s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210813200824-393438 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-85ktb" [9dc5daad-6194-45f7-b5de-f1707d411110] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-85ktb" [9dc5daad-6194-45f7-b5de-f1707d411110] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 10.017676986s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210813200824-393438 addons disable gcp-auth --alsologtostderr -v=1: (11.767838518s)
--- PASS: TestAddons/parallel/GCPAuth (46.20s)

TestCertOptions (75.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813205929-393438 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813205929-393438 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m14.156004016s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813205929-393438 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813205929-393438 config view
helpers_test.go:176: Cleaning up "cert-options-20210813205929-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813205929-393438
--- PASS: TestCertOptions (75.40s)

TestForceSystemdFlag (105.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813205929-393438 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813205929-393438 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m43.948106799s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210813205929-393438 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813205929-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813205929-393438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210813205929-393438: (1.582139208s)
--- PASS: TestForceSystemdFlag (105.93s)

TestForceSystemdEnv (76.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813205836-393438 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813205836-393438 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m14.90516201s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210813205836-393438 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210813205836-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813205836-393438
--- PASS: TestForceSystemdEnv (76.10s)

TestKVMDriverInstallOrUpdate (2.82s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.82s)

TestErrorSpam/setup (60.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210813201942-393438 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201942-393438 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210813201942-393438 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813201942-393438 --driver=kvm2  --container-runtime=containerd: (1m0.344208869s)
--- PASS: TestErrorSpam/setup (60.34s)

TestErrorSpam/start (0.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (5.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause: exit status 80 (2.701939467s)

-- stdout --
	* Pausing node nospam-20210813201942-393438 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 02fdc0712966ff65cd60ea732db4f6738687fd3e8d35145877590f9746d2e9b5 423f07c7324c70919132f496f0a87489fb8d1357135a2cd258bf96529655b925: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-13T20:20:46Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause: (1.953087272s)
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 pause
--- PASS: TestErrorSpam/pause (5.14s)
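
The first `pause` attempt above exits 80 because two container IDs were passed to a single `runc pause` invocation, while the usage text in the captured stderr states the subcommand takes exactly one <container-id>. A sketch of the per-container invocation that would avoid this (command layout copied from the log; the pauseAll helper is hypothetical, not minikube's actual fix):

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll runs `sudo runc --root /run/containerd/runc/k8s.io pause <id>`
// once per container, since runc's pause subcommand requires exactly one
// argument, as the usage text in the failure above states.
func pauseAll(ids []string) error {
	for _, id := range ids {
		out, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "pause", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// The two IDs that were joined into one invocation in the log.
	err := pauseAll([]string{
		"02fdc0712966ff65cd60ea732db4f6738687fd3e8d35145877590f9746d2e9b5",
		"423f07c7324c70919132f496f0a87489fb8d1357135a2cd258bf96529655b925",
	})
	fmt.Println(err)
}

The retried `pause` two commands later completes in under two seconds, which suggests the doubled-up argument was a transient pairing rather than a persistent state problem.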

TestErrorSpam/unpause (1.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

TestErrorSpam/stop (5.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 stop: (5.1220341s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813201942-393438 --log_dir /tmp/nospam-20210813201942-393438 stop
--- PASS: TestErrorSpam/stop (5.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/files/etc/test/nested/copy/393438/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (124.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0813 20:21:29.730257  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:29.736223  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:29.746438  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:29.766682  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:29.806974  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:29.887269  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:30.047740  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:30.368329  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:31.008468  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:33.621488  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:36.182387  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:41.302991  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:21:51.544072  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:22:12.024638  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:22:52.986089  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813202056-393438 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (2m4.268220658s)
--- PASS: TestFunctional/serial/StartWithProxy (124.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.21s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813202056-393438 --alsologtostderr -v=8: (26.209805256s)
functional_test.go:631: soft start took 26.210662607s for "functional-20210813202056-393438" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.21s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.2s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813202056-393438 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:3.1: (1.028741378s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:3.3: (1.739634686s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add k8s.gcr.io/pause:latest: (1.713277758s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

TestFunctional/serial/CacheCmd/cache/add_local (5.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813202056-393438 /tmp/functional-20210813202056-393438473942794
functional_test.go:1012: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210813202056-393438 /tmp/functional-20210813202056-393438473942794: (3.794184773s)
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add minikube-local-cache-test:functional-20210813202056-393438
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 cache add minikube-local-cache-test:functional-20210813202056-393438: (1.23547548s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache delete minikube-local-cache-test:functional-20210813202056-393438
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813202056-393438
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (5.13s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (222.779617ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 cache reload: (1.763578699s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.45s)
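
The reload cycle above (crictl rmi, failed inspecti, "cache reload", successful inspecti) can be reproduced outside the test harness. A minimal Go sketch, assuming a minikube binary on PATH and reusing this run's profile name (the test itself drives out/minikube-linux-amd64):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const profile = "functional-20210813202056-393438" // assumed profile name

	// Remove the cached image from the node's containerd store.
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi k8s.gcr.io/pause:latest")

	// inspecti should now fail: the image is gone from the node.
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// cache reload pushes every image in minikube's local cache back into the node.
	run("minikube", "-p", profile, "cache", "reload")

	// inspecti succeeds again once the cache has been reloaded.
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}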

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 kubectl -- --context functional-20210813202056-393438 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813202056-393438 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (35.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 20:24:14.906775  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813202056-393438 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.740646496s)
functional_test.go:719: restart took 35.740752262s for "functional-20210813202056-393438" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.74s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813202056-393438 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 logs: (1.327023163s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 logs --file /tmp/functional-20210813202056-393438474592993/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 logs --file /tmp/functional-20210813202056-393438474592993/logs.txt: (1.328895355s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 config get cpus: exit status 14 (54.128558ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 config get cpus: exit status 14 (52.840217ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (4.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813202056-393438 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813202056-393438 --alsologtostderr -v=1] ...

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:507: unable to kill pid 398866: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.19s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813202056-393438 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (170.329091ms)

-- stdout --
	* [functional-20210813202056-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0813 20:24:46.218417  398594 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:24:46.218517  398594 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:46.218527  398594 out.go:311] Setting ErrFile to fd 2...
	I0813 20:24:46.218532  398594 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:46.218637  398594 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:24:46.218875  398594 out.go:305] Setting JSON to false
	I0813 20:24:46.257836  398594 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":4049,"bootTime":1628882238,"procs":180,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:24:46.257944  398594 start.go:121] virtualization: kvm guest
	I0813 20:24:46.261148  398594 out.go:177] * [functional-20210813202056-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:24:46.263064  398594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:24:46.264564  398594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:24:46.265899  398594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:46.267267  398594 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:24:46.267705  398594 config.go:177] Loaded profile config "functional-20210813202056-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:24:46.268091  398594 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:24:46.268146  398594 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:46.280106  398594 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33971
	I0813 20:24:46.280585  398594 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:46.281165  398594 main.go:130] libmachine: Using API Version  1
	I0813 20:24:46.281186  398594 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:46.281665  398594 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:46.281862  398594 main.go:130] libmachine: (functional-20210813202056-393438) Calling .DriverName
	I0813 20:24:46.282056  398594 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:24:46.282516  398594 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:24:46.282559  398594 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:46.293924  398594 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0813 20:24:46.294429  398594 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:46.294975  398594 main.go:130] libmachine: Using API Version  1
	I0813 20:24:46.295001  398594 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:46.295373  398594 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:46.295548  398594 main.go:130] libmachine: (functional-20210813202056-393438) Calling .DriverName
	I0813 20:24:46.330217  398594 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 20:24:46.330246  398594 start.go:278] selected driver: kvm2
	I0813 20:24:46.330256  398594 start.go:751] validating driver "kvm2" against &{Name:functional-20210813202056-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813202056-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:24:46.330400  398594 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:24:46.334173  398594 out.go:177] 
	W0813 20:24:46.334291  398594 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 20:24:46.335631  398594 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)
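
The dry-run above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because the requested 250MB is below the 1800MB usable minimum reported in the log. A minimal Go sketch of asserting on that exit code, assuming minikube on PATH and this run's profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "functional-20210813202056-393438", // assumed existing profile
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=containerd")
	err := cmd.Run()

	// Exit code 23 is what the log above shows for RSRC_INSUFFICIENT_REQ_MEMORY.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit code")
	} else {
		fmt.Println("unexpected result:", err)
	}
}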

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813202056-393438 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813202056-393438 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (175.098701ms)

-- stdout --
	* [functional-20210813202056-393438] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0813 20:24:46.569635  398702 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:24:46.569711  398702 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:46.569719  398702 out.go:311] Setting ErrFile to fd 2...
	I0813 20:24:46.569722  398702 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:24:46.569853  398702 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:24:46.570043  398702 out.go:305] Setting JSON to false
	I0813 20:24:46.615063  398702 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":4049,"bootTime":1628882238,"procs":191,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:24:46.615173  398702 start.go:121] virtualization: kvm guest
	I0813 20:24:46.617616  398702 out.go:177] * [functional-20210813202056-393438] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 20:24:46.619182  398702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:24:46.620653  398702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:24:46.622124  398702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:24:46.623809  398702 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:24:46.624339  398702 config.go:177] Loaded profile config "functional-20210813202056-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:24:46.624899  398702 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:24:46.624963  398702 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:46.635755  398702 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0813 20:24:46.636196  398702 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:46.636754  398702 main.go:130] libmachine: Using API Version  1
	I0813 20:24:46.636775  398702 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:46.637150  398702 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:46.637348  398702 main.go:130] libmachine: (functional-20210813202056-393438) Calling .DriverName
	I0813 20:24:46.637526  398702 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:24:46.637880  398702 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:24:46.637920  398702 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:24:46.648649  398702 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0813 20:24:46.649088  398702 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:24:46.649574  398702 main.go:130] libmachine: Using API Version  1
	I0813 20:24:46.649595  398702 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:24:46.649928  398702 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:24:46.650108  398702 main.go:130] libmachine: (functional-20210813202056-393438) Calling .DriverName
	I0813 20:24:46.677938  398702 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0813 20:24:46.677961  398702 start.go:278] selected driver: kvm2
	I0813 20:24:46.677967  398702 start.go:751] validating driver "kvm2" against &{Name:functional-20210813202056-393438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813202056-393438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 20:24:46.678114  398702 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:24:46.680392  398702 out.go:177] 
	W0813 20:24:46.680510  398702 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 20:24:46.681866  398702 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

TestFunctional/parallel/ServiceCmd (13.42s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210813202056-393438 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813202056-393438 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-j2ktl" [04945dce-70b0-433f-9eed-ab9b03583aad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-j2ktl" [04945dce-70b0-433f-9eed-ab9b03583aad] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.196241224s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.39.96:30897
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.39.96:30897
functional_test.go:1431: Attempting to fetch http://192.168.39.96:30897 ...
functional_test.go:1450: http://192.168.39.96:30897: success! body:

Hostname: hello-node-6cbfcd7cbc-j2ktl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.96:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.96:30897
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (13.42s)
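
The service check above resolves the NodePort endpoint with "minikube service --url" and then fetches it over HTTP. A minimal Go sketch of the same round trip, assuming minikube on PATH; the profile and service names are taken from this run and would differ elsewhere:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service's reachable URL.
	out, err := exec.Command("minikube",
		"-p", "functional-20210813202056-393438",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.96:30897

	// Fetch it and show the echoserver response, as the test does.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
}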

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (38.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [f0f4d7c9-4007-4a16-a2cd-3c742a7df3a0] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014243313s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813202056-393438 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813202056-393438 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813202056-393438 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813202056-393438 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813202056-393438 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [25925633-d36c-4aaa-a459-b9febc09d95e] Pending
helpers_test.go:343: "sp-pod" [25925633-d36c-4aaa-a459-b9febc09d95e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [25925633-d36c-4aaa-a459-b9febc09d95e] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.020278273s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec sp-pod -- touch /tmp/mount/foo
2021/08/13 20:24:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210813202056-393438 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210813202056-393438 delete -f testdata/storage-provisioner/pod.yaml: (4.307649487s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813202056-393438 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [8c0e4df4-b33f-4c79-a906-ba77595adb62] Pending
helpers_test.go:343: "sp-pod" [8c0e4df4-b33f-4c79-a906-ba77595adb62] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [8c0e4df4-b33f-4c79-a906-ba77595adb62] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.011476341s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.39s)
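
The persistence check above writes a marker file through the first sp-pod, deletes the pod, recreates it from the same PVC, and lists the file again. A minimal kubectl-driven Go sketch of that sequence, assuming the context name and testdata paths shown in this run:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against this run's context and echoes output.
func kubectl(args ...string) {
	args = append([]string{"--context", "functional-20210813202056-393438"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Write a marker file into the PVC-backed mount.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Delete and recreate the pod; the PVC (and its data) outlives it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// (wait for the new sp-pod to reach Running before this step)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}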

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (0.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/MySQL (28.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210813202056-393438 replace --force -f testdata/mysql.yaml
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-n4w75" [8dda897a-548b-4b69-bd76-1881b3f11d45] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-n4w75" [8dda897a-548b-4b69-bd76-1881b3f11d45] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.013742948s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;": exit status 1 (281.755059ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;": exit status 1 (335.255263ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;": exit status 1 (389.856043ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813202056-393438 exec mysql-9bbbc5bbb-n4w75 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.53s)
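
The failed exec attempts above are expected: the test polls the mysql client until mysqld finishes initializing (both ERROR 1045 and ERROR 2002 are startup-phase responses). A minimal retry sketch in Go, assuming this run's context and pod names; the attempt cap and sleep interval are arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl",
			"--context", "functional-20210813202056-393438",
			"exec", "mysql-9bbbc5bbb-n4w75", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 1045 / ERROR 2002 show up here while mysqld is still starting.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
}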

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/393438/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /etc/test/nested/copy/393438/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.91s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/393438.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /etc/ssl/certs/393438.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/393438.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /usr/share/ca-certificates/393438.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3934382.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /etc/ssl/certs/3934382.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/3934382.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /usr/share/ca-certificates/3934382.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813202056-393438 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/LoadImage (3s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813202056-393438
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image load docker.io/library/busybox:load-functional-20210813202056-393438
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 image load docker.io/library/busybox:load-functional-20210813202056-393438: (2.076115652s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813202056-393438
--- PASS: TestFunctional/parallel/LoadImage (3.00s)
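The load flow is: pull on the host with docker, retag under a unique name, push the image into the cluster's containerd with image load, then confirm via crictl (docker is not the runtime inside this cluster). A hand-run sketch of the same steps:

  docker pull busybox:1.33
  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813202056-393438
  out/minikube-linux-amd64 -p functional-20210813202056-393438 image load docker.io/library/busybox:load-functional-20210813202056-393438
  # verify inside the VM against containerd, not the host docker
  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813202056-393438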

TestFunctional/parallel/RemoveImage (3.44s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813202056-393438
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image load docker.io/library/busybox:remove-functional-20210813202056-393438
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 image load docker.io/library/busybox:remove-functional-20210813202056-393438: (1.977051145s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image rm docker.io/library/busybox:remove-functional-20210813202056-393438
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.44s)
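Removal mirrors the load flow: after image rm, the final crictl images listing should no longer contain the tag. Sketch of the removal and check:

  out/minikube-linux-amd64 -p functional-20210813202056-393438 image rm docker.io/library/busybox:remove-functional-20210813202056-393438
  # the busybox:remove-... tag should be absent from this listing
  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl images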

TestFunctional/parallel/LoadImageFromFile (2.18s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813202056-393438
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813202056-393438
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/busybox.tar
functional_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/busybox.tar: (1.240795264s)
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.18s)
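This variant goes through a tarball instead of the local daemon: docker save writes the image to disk and image load reads it back, so no registry or daemon socket is involved. Sketch:

  docker pull busybox:1.31
  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813202056-393438
  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813202056-393438
  out/minikube-linux-amd64 -p functional-20210813202056-393438 image load ./busybox.tar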

TestFunctional/parallel/BuildImage (5.36s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image build -t localhost/my-image:functional-20210813202056-393438 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 image build -t localhost/my-image:functional-20210813202056-393438 testdata/build: (5.061772536s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210813202056-393438 image build -t localhost/my-image:functional-20210813202056-393438 testdata/build:
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 0.9s
#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.1s
#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 resolve docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60 0.1s done
#4 DONE 0.1s
#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 DONE 0.1s
#5 [2/3] RUN true
#5 DONE 0.8s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.8s done
#8 exporting manifest sha256:45091912cb4d2103b3e17795a997d905e2bb5bf869cc6a0907b60848a7fee9dc 0.0s done
#8 exporting config sha256:f3f2dff10c11cef404fb385251559f369cbe28d3f13aae3dd49c48a675c98e72
#8 exporting config sha256:f3f2dff10c11cef404fb385251559f369cbe28d3f13aae3dd49c48a675c98e72 0.0s done
#8 naming to localhost/my-image:functional-20210813202056-393438 done
#8 DONE 0.8s
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813202056-393438 -- sudo crictl inspecti localhost/my-image:functional-20210813202056-393438
--- PASS: TestFunctional/parallel/BuildImage (5.36s)
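The BuildKit trace implies a three-step build ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /). A hypothetical reconstruction of testdata/build, consistent with the trace but not necessarily the actual repo contents, written as a shell sketch:

  # rebuild an equivalent context by hand (names here are illustrative)
  mkdir -p build && echo hello > build/content.txt
  cat > build/Dockerfile <<'EOF'
  FROM busybox
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-amd64 -p functional-20210813202056-393438 image build -t localhost/my-image:functional-20210813202056-393438 build

The result lands in containerd, which is why the follow-up check uses crictl inspecti rather than docker.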

TestFunctional/parallel/ListImages (0.22s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813202056-393438 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210813202056-393438
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.22s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active docker": exit status 1 (220.843315ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active crio": exit status 1 (235.873258ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
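systemctl is-active exits 0 only for "active", so the two non-zero exits with "inactive" on stdout are exactly what this test wants: with --container-runtime=containerd, docker and crio must both be disabled. The same checks by hand:

  # exit status 3 with "inactive" is the desired outcome for both
  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active docker"
  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo systemctl is-active crio"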

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813202056-393438 version -o=json --components: (1.358847736s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210813202056-393438 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.7s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1245: Took "637.610946ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "59.444079ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.70s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1295: Took "239.740832ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "58.343373ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/any-port (7.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mounttest415324876:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628886277609518206" to /tmp/mounttest415324876/created-by-test
functional_test_mount_test.go:110: wrote "test-1628886277609518206" to /tmp/mounttest415324876/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628886277609518206" to /tmp/mounttest415324876/test-1628886277609518206
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.330152ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 20:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 20:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 20:24 test-1628886277609518206
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh cat /mount-9p/test-1628886277609518206
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813202056-393438 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [9c0ac4bf-650c-4a02-b12d-72cad26de531] Pending
helpers_test.go:343: "busybox-mount" [9c0ac4bf-650c-4a02-b12d-72cad26de531] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [9c0ac4bf-650c-4a02-b12d-72cad26de531] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.014401018s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210813202056-393438 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mounttest415324876:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.98s)
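The mount lifecycle above is: start the 9p mount daemon, poll findmnt until the mount appears (the first failed probe is normal while the daemon starts), exercise the mount from a pod, then unmount and stop the daemon. A minimal sketch with a scratch directory (mounttest415324876 above is just a temp dir from this run; /tmp/mountdemo here is illustrative):

  mkdir -p /tmp/mountdemo && echo hello > /tmp/mountdemo/created-by-hand
  out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mountdemo:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo umount -f /mount-9p"
  kill $!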

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mounttest551664571:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.336622ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mounttest551664571:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh "sudo umount -f /mount-9p": exit status 1 (243.263507ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210813202056-393438 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mounttest551664571:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)
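Pinning the daemon to a fixed port changes only the mount invocation; the rest is as in any-port. The "not mounted" umount failure above occurs during cleanup after the daemon has already gone away, and the test tolerates it (it passed regardless). Sketch of the pinned-port variant:

  out/minikube-linux-amd64 mount -p functional-20210813202056-393438 /tmp/mountdemo:/mount-9p --alsologtostderr -v=1 --port 46464 &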

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210813202056-393438 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.97.159.119 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210813202056-393438 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
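Taken together, the tunnel sub-tests are one lifecycle: start the tunnel, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then stop the tunnel. Sketch (10.97.159.119 is the IP assigned in this particular run):

  out/minikube-linux-amd64 -p functional-20210813202056-393438 tunnel --alsologtostderr &
  kubectl --context functional-20210813202056-393438 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.97.159.119
  kill $!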

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813202056-393438
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813202056-393438
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813202056-393438
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813202056-393438
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813202657-393438 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813202657-393438 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.38247ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813202657-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"5fe41c57-42e5-47dc-b9db-6fb984d01575","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig"},"datacontenttype":"application/json","id":"5201174a-7060-4bd6-95a7-9a33128f941c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"88fe609c-ac25-463b-8f1d-3cbb1d88b6d6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube"},"datacontenttype":"application/json","id":"0cbeaa8b-a74f-4e54-a06d-c6f9f3e3d138","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"10bcdca2-d604-4122-b5fb-264567e6c104","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"1b73def4-8448-4d1f-a3fc-f8407b714943","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813202657-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813202657-393438
--- PASS: TestErrorJSONOutput (0.32s)
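Each --output=json line is a CloudEvents envelope (specversion, type, data), so the stream is easy to post-process. A sketch that extracts just the error message, assuming jq is available on the host:

  out/minikube-linux-amd64 start -p json-output-error-20210813202657-393438 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # prints: The driver 'fail' is not supported on linux/amd64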

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (147.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0813 20:26:58.747169  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:29:22.129583  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.134875  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.145178  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.165423  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.205672  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.285948  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.446345  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:22.766471  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:23.406963  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:24.687870  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m27.386194655s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (147.78s)
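The E-level cert_rotation lines above refer to client certs of profiles from earlier tests that have since been deleted; the test still passes, so they appear to be harmless background noise rather than failures of this run. The two-node start itself:

  out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=containerd
  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr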

TestMultiNode/serial/DeployApp2Nodes (9.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- rollout status deployment/busybox
E0813 20:29:27.248670  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:29:32.368890  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- rollout status deployment/busybox: (7.482890285s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-mschh -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-mschh -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-mschh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.76s)
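The lookups exercise DNS from each of the two busybox pods: an external name (kubernetes.io), the in-cluster short name (kubernetes.default), and the full FQDN, which together cover upstream forwarding and cluster DNS across nodes. Any one of them can be repeated by hand, e.g.:

  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- nslookup kubernetes.default.svc.cluster.local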

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-mschh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-mschh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
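The awk 'NR==5' | cut -d' ' -f3 pipeline scrapes the resolved address for host.minikube.internal out of busybox's nslookup output (fragile, but fine for busybox's fixed layout), and the ping confirms the host gateway, 192.168.39.1 in this run, is reachable from inside a pod:

  out/minikube-linux-amd64 kubectl -p multinode-20210813202658-393438 -- exec busybox-84b6686758-4vnds -- sh -c "ping -c 1 192.168.39.1"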

TestMultiNode/serial/AddNode (53.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202658-393438 -v 3 --alsologtostderr
E0813 20:29:42.609108  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:30:03.090052  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813202658-393438 -v 3 --alsologtostderr: (52.536016459s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.09s)
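Adding a third node is a single command against the existing profile; status afterwards should report all three nodes Running:

  out/minikube-linux-amd64 node add -p multinode-20210813202658-393438 -v 3 --alsologtostderr
  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr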

TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

TestMultiNode/serial/CopyFile (1.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 cp testdata/cp-test.txt multinode-20210813202658-393438-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 ssh -n multinode-20210813202658-393438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 cp testdata/cp-test.txt multinode-20210813202658-393438-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 ssh -n multinode-20210813202658-393438-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.80s)
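minikube cp with a bare destination path targets the primary node; prefixing the destination with a node name (-m02, -m03 above) copies to that node instead, and ssh -n picks the node to verify on. Sketch for one worker:

  out/minikube-linux-amd64 -p multinode-20210813202658-393438 cp testdata/cp-test.txt multinode-20210813202658-393438-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-20210813202658-393438 ssh -n multinode-20210813202658-393438-m02 "sudo cat /home/docker/cp-test.txt"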

TestMultiNode/serial/StopNode (2.91s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202658-393438 node stop m03: (2.091344225s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202658-393438 status: exit status 7 (411.610876ms)
-- stdout --
	multinode-20210813202658-393438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202658-393438-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202658-393438-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr: exit status 7 (405.777486ms)
-- stdout --
	multinode-20210813202658-393438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813202658-393438-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813202658-393438-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 20:30:34.329720  401141 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:30:34.329804  401141 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:30:34.329814  401141 out.go:311] Setting ErrFile to fd 2...
	I0813 20:30:34.329817  401141 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:30:34.329918  401141 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:30:34.330065  401141 out.go:305] Setting JSON to false
	I0813 20:30:34.330083  401141 mustload.go:65] Loading cluster: multinode-20210813202658-393438
	I0813 20:30:34.330379  401141 config.go:177] Loaded profile config "multinode-20210813202658-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:30:34.330391  401141 status.go:253] checking status of multinode-20210813202658-393438 ...
	I0813 20:30:34.330743  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.330780  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.341756  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0813 20:30:34.342203  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.342876  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.342903  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.343312  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.343499  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetState
	I0813 20:30:34.346260  401141 status.go:328] multinode-20210813202658-393438 host status = "Running" (err=<nil>)
	I0813 20:30:34.346278  401141 host.go:66] Checking if "multinode-20210813202658-393438" exists ...
	I0813 20:30:34.346576  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.346607  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.356931  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37003
	I0813 20:30:34.357312  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.357752  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.357770  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.358078  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.358313  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetIP
	I0813 20:30:34.363501  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | domain multinode-20210813202658-393438 has defined MAC address 52:54:00:df:20:70 in network mk-multinode-20210813202658-393438
	I0813 20:30:34.363879  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:20:70", ip: ""} in network mk-multinode-20210813202658-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:27:12 +0000 UTC Type:0 Mac:52:54:00:df:20:70 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-20210813202658-393438 Clientid:01:52:54:00:df:20:70}
	I0813 20:30:34.363917  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | domain multinode-20210813202658-393438 has defined IP address 192.168.39.140 and MAC address 52:54:00:df:20:70 in network mk-multinode-20210813202658-393438
	I0813 20:30:34.363971  401141 host.go:66] Checking if "multinode-20210813202658-393438" exists ...
	I0813 20:30:34.364268  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.364307  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.374174  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33411
	I0813 20:30:34.374542  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.374978  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.375003  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.375300  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.375485  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .DriverName
	I0813 20:30:34.375708  401141 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:30:34.375759  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetSSHHostname
	I0813 20:30:34.380578  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | domain multinode-20210813202658-393438 has defined MAC address 52:54:00:df:20:70 in network mk-multinode-20210813202658-393438
	I0813 20:30:34.380991  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:20:70", ip: ""} in network mk-multinode-20210813202658-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:27:12 +0000 UTC Type:0 Mac:52:54:00:df:20:70 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-20210813202658-393438 Clientid:01:52:54:00:df:20:70}
	I0813 20:30:34.381021  401141 main.go:130] libmachine: (multinode-20210813202658-393438) DBG | domain multinode-20210813202658-393438 has defined IP address 192.168.39.140 and MAC address 52:54:00:df:20:70 in network mk-multinode-20210813202658-393438
	I0813 20:30:34.381127  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetSSHPort
	I0813 20:30:34.381287  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetSSHKeyPath
	I0813 20:30:34.381417  401141 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetSSHUsername
	I0813 20:30:34.381532  401141 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202658-393438/id_rsa Username:docker}
	I0813 20:30:34.474468  401141 ssh_runner.go:149] Run: systemctl --version
	I0813 20:30:34.479980  401141 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:30:34.490508  401141 kubeconfig.go:93] found "multinode-20210813202658-393438" server: "https://192.168.39.140:8443"
	I0813 20:30:34.490534  401141 api_server.go:164] Checking apiserver status ...
	I0813 20:30:34.490561  401141 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 20:30:34.500076  401141 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2669/cgroup
	I0813 20:30:34.506542  401141 api_server.go:180] apiserver freezer: "3:freezer:/kubepods/burstable/pod52a50868192da9125faf61984f17211a/f2f1a1f239344a377eb294e6f03c6f8cc94854b848bc8be6fd37e8657830d511"
	I0813 20:30:34.506583  401141 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod52a50868192da9125faf61984f17211a/f2f1a1f239344a377eb294e6f03c6f8cc94854b848bc8be6fd37e8657830d511/freezer.state
	I0813 20:30:34.513378  401141 api_server.go:202] freezer state: "THAWED"
	I0813 20:30:34.513398  401141 api_server.go:239] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0813 20:30:34.521737  401141 api_server.go:265] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0813 20:30:34.521758  401141 status.go:419] multinode-20210813202658-393438 apiserver status = Running (err=<nil>)
	I0813 20:30:34.521768  401141 status.go:255] multinode-20210813202658-393438 status: &{Name:multinode-20210813202658-393438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:30:34.521789  401141 status.go:253] checking status of multinode-20210813202658-393438-m02 ...
	I0813 20:30:34.522216  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.522262  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.533019  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33849
	I0813 20:30:34.533390  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.533843  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.533862  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.534202  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.534378  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetState
	I0813 20:30:34.537119  401141 status.go:328] multinode-20210813202658-393438-m02 host status = "Running" (err=<nil>)
	I0813 20:30:34.537140  401141 host.go:66] Checking if "multinode-20210813202658-393438-m02" exists ...
	I0813 20:30:34.537413  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.537454  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.547691  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0813 20:30:34.548036  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.548420  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.548436  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.548735  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.548908  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetIP
	I0813 20:30:34.553572  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | domain multinode-20210813202658-393438-m02 has defined MAC address 52:54:00:3c:ae:ce in network mk-multinode-20210813202658-393438
	I0813 20:30:34.553908  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:ae:ce", ip: ""} in network mk-multinode-20210813202658-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:28:42 +0000 UTC Type:0 Mac:52:54:00:3c:ae:ce Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-20210813202658-393438-m02 Clientid:01:52:54:00:3c:ae:ce}
	I0813 20:30:34.553937  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | domain multinode-20210813202658-393438-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:3c:ae:ce in network mk-multinode-20210813202658-393438
	I0813 20:30:34.554033  401141 host.go:66] Checking if "multinode-20210813202658-393438-m02" exists ...
	I0813 20:30:34.554330  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.554360  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.564075  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0813 20:30:34.564449  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.564878  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.564902  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.565230  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.565381  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .DriverName
	I0813 20:30:34.565563  401141 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 20:30:34.565585  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetSSHHostname
	I0813 20:30:34.570504  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | domain multinode-20210813202658-393438-m02 has defined MAC address 52:54:00:3c:ae:ce in network mk-multinode-20210813202658-393438
	I0813 20:30:34.570912  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:ae:ce", ip: ""} in network mk-multinode-20210813202658-393438: {Iface:virbr1 ExpiryTime:2021-08-13 21:28:42 +0000 UTC Type:0 Mac:52:54:00:3c:ae:ce Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-20210813202658-393438-m02 Clientid:01:52:54:00:3c:ae:ce}
	I0813 20:30:34.570948  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) DBG | domain multinode-20210813202658-393438-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:3c:ae:ce in network mk-multinode-20210813202658-393438
	I0813 20:30:34.571074  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetSSHPort
	I0813 20:30:34.571215  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetSSHKeyPath
	I0813 20:30:34.571360  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetSSHUsername
	I0813 20:30:34.571463  401141 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/machines/multinode-20210813202658-393438-m02/id_rsa Username:docker}
	I0813 20:30:34.657599  401141 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 20:30:34.666912  401141 status.go:255] multinode-20210813202658-393438-m02 status: &{Name:multinode-20210813202658-393438-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:30:34.666935  401141 status.go:253] checking status of multinode-20210813202658-393438-m03 ...
	I0813 20:30:34.667252  401141 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:30:34.667295  401141 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:30:34.678337  401141 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0813 20:30:34.678717  401141 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:30:34.679151  401141 main.go:130] libmachine: Using API Version  1
	I0813 20:30:34.679172  401141 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:30:34.679517  401141 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:30:34.679661  401141 main.go:130] libmachine: (multinode-20210813202658-393438-m03) Calling .GetState
	I0813 20:30:34.682346  401141 status.go:328] multinode-20210813202658-393438-m03 host status = "Stopped" (err=<nil>)
	I0813 20:30:34.682357  401141 status.go:341] host is not running, skipping remaining checks
	I0813 20:30:34.682362  401141 status.go:255] multinode-20210813202658-393438-m03 status: &{Name:multinode-20210813202658-393438-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.91s)
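
The trace above shows what sits behind a status probe on a running control plane: resolve the kube-apiserver PID, confirm its freezer cgroup is THAWED, then hit /healthz. A minimal shell sketch of the same sequence, run on the node (the variable names are ours; the IP is the one from the trace):

    # Resolve the kube-apiserver PID, as the status probe does.
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # Locate the process's freezer cgroup and confirm it is not frozen.
    CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"    # expect: THAWED
    # Probe the apiserver health endpoint (-k: self-signed cert).
    curl -k https://192.168.39.140:8443/healthz             # expect: ok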

TestMultiNode/serial/StartAfterStop (70.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 node start m03 --alsologtostderr
E0813 20:30:44.050773  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:31:29.729281  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202658-393438 node start m03 --alsologtostderr: (1m9.9202272s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (70.54s)

TestMultiNode/serial/RestartKeepsNodes (552.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202658-393438
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813202658-393438
E0813 20:32:05.974594  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:34:22.130089  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:34:49.815437  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813202658-393438: (3m6.221824894s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true -v=8 --alsologtostderr
E0813 20:36:29.729817  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:37:54.107599  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 20:39:22.129990  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true -v=8 --alsologtostderr: (6m6.479704766s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202658-393438
--- PASS: TestMultiNode/serial/RestartKeepsNodes (552.81s)
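
Condensed, the invariant this test asserts is that a full stop and restart leaves the node list intact; a sketch of the command sequence above:

    out/minikube-linux-amd64 node list -p multinode-20210813202658-393438    # record the nodes
    out/minikube-linux-amd64 stop -p multinode-20210813202658-393438
    out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-20210813202658-393438    # must match the first listing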

TestMultiNode/serial/DeleteNode (2.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202658-393438 node delete m03: (1.486041158s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)
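
The final check walks each node's conditions with a Go template; stripped of the test harness's extra quoting it can be run directly, printing one Ready status (True/False) per node:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'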

TestMultiNode/serial/StopMultiNode (184.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 stop
E0813 20:41:29.731180  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813202658-393438 stop: (3m4.197792079s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202658-393438 status: exit status 7 (82.247333ms)

-- stdout --
	multinode-20210813202658-393438
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202658-393438-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr: exit status 7 (81.361026ms)

-- stdout --
	multinode-20210813202658-393438
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813202658-393438-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0813 20:44:04.540867  402346 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:44:04.540961  402346 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:04.540977  402346 out.go:311] Setting ErrFile to fd 2...
	I0813 20:44:04.540983  402346 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:44:04.541145  402346 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:44:04.541365  402346 out.go:305] Setting JSON to false
	I0813 20:44:04.541390  402346 mustload.go:65] Loading cluster: multinode-20210813202658-393438
	I0813 20:44:04.541860  402346 config.go:177] Loaded profile config "multinode-20210813202658-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:44:04.541881  402346 status.go:253] checking status of multinode-20210813202658-393438 ...
	I0813 20:44:04.542250  402346 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:44:04.542295  402346 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:04.552797  402346 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0813 20:44:04.553277  402346 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:04.553812  402346 main.go:130] libmachine: Using API Version  1
	I0813 20:44:04.553839  402346 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:04.554287  402346 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:04.554467  402346 main.go:130] libmachine: (multinode-20210813202658-393438) Calling .GetState
	I0813 20:44:04.557029  402346 status.go:328] multinode-20210813202658-393438 host status = "Stopped" (err=<nil>)
	I0813 20:44:04.557043  402346 status.go:341] host is not running, skipping remaining checks
	I0813 20:44:04.557047  402346 status.go:255] multinode-20210813202658-393438 status: &{Name:multinode-20210813202658-393438 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 20:44:04.557059  402346 status.go:253] checking status of multinode-20210813202658-393438-m02 ...
	I0813 20:44:04.557445  402346 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0813 20:44:04.557476  402346 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 20:44:04.567612  402346 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0813 20:44:04.568059  402346 main.go:130] libmachine: () Calling .GetVersion
	I0813 20:44:04.568886  402346 main.go:130] libmachine: Using API Version  1
	I0813 20:44:04.568912  402346 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 20:44:04.569687  402346 main.go:130] libmachine: () Calling .GetMachineName
	I0813 20:44:04.569896  402346 main.go:130] libmachine: (multinode-20210813202658-393438-m02) Calling .GetState
	I0813 20:44:04.572420  402346 status.go:328] multinode-20210813202658-393438-m02 host status = "Stopped" (err=<nil>)
	I0813 20:44:04.572437  402346 status.go:341] host is not running, skipping remaining checks
	I0813 20:44:04.572442  402346 status.go:255] multinode-20210813202658-393438-m02 status: &{Name:multinode-20210813202658-393438-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.36s)
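
The non-zero exits above are the point of the test: once every node reports Stopped, the status command exits with code 7, which the harness treats as the expected "stopped" state rather than a failure. A quick sketch of the same check:

    out/minikube-linux-amd64 -p multinode-20210813202658-393438 status
    echo $?    # expect 7 after a completed stop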

TestMultiNode/serial/RestartMultiNode (237.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0813 20:44:22.129143  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:45:45.175968  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:46:29.729040  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202658-393438 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m56.634052049s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813202658-393438 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (237.15s)

TestMultiNode/serial/ValidateNameConflict (61.48s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813202658-393438
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.837778ms)

-- stdout --
	* [multinode-20210813202658-393438-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813202658-393438-m02' is duplicated with machine name 'multinode-20210813202658-393438-m02' in profile 'multinode-20210813202658-393438'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m03 --driver=kvm2  --container-runtime=containerd: (59.907973614s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813202658-393438
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813202658-393438: exit status 80 (237.227556ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210813202658-393438
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813202658-393438-m03 already exists in multinode-20210813202658-393438-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813202658-393438-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210813202658-393438-m03: (1.182340871s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (61.48s)
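
The rule exercised here: a new profile name may not collide with a machine name that already belongs to a multi-node profile. As a sketch of the two attempts above, the colliding name is rejected with exit code 14 (MK_USAGE) while a free name goes through:

    # -m02 is already the second machine of the multinode profile: rejected, exit 14.
    out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m02 --driver=kvm2 --container-runtime=containerd
    # -m03 is free at this point, so the identical command succeeds.
    out/minikube-linux-amd64 start -p multinode-20210813202658-393438-m03 --driver=kvm2 --container-runtime=containerd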

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.1s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.096690755s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.10s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.82s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0813 20:49:22.129847  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.817524121s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.82s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.56s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.555744982s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.56s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.26s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.262232016s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.26s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.05s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.050086614s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.05s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.4s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.401833664s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.40s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (13.89s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.891290284s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (13.89s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.7s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (12.70315979s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.70s)
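
All of the kvm2-driver package checks above share one pattern: a throwaway container of the target distro installs libvirt0 (the driver's shared-library dependency), then installs the freshly built .deb. Generalized as a sketch, with IMAGE standing in for any of the tags exercised:

    IMAGE=ubuntu:20.04    # any of debian:9..debian:sid, ubuntu:18.04..ubuntu:latest
    docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp "$IMAGE" \
      sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"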

TestPreload (182.93s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0
E0813 20:51:29.729663  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0: (2m8.928622089s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813205038-393438 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210813205038-393438 -- sudo crictl pull busybox: (1.325063842s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3: (51.298886371s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813205038-393438 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20210813205038-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813205038-393438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813205038-393438: (1.142483404s)
--- PASS: TestPreload (182.93s)
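
The preload test reduces to four steps: start without a preload tarball, pull an extra image into the node's containerd, restart onto a Kubernetes version that does ship a preload, and verify the pulled image survived. A sketch of the commands above:

    out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.17.0
    out/minikube-linux-amd64 ssh -p test-preload-20210813205038-393438 -- sudo crictl pull busybox
    out/minikube-linux-amd64 start -p test-preload-20210813205038-393438 --memory=2200 --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.17.3
    out/minikube-linux-amd64 ssh -p test-preload-20210813205038-393438 -- sudo crictl image ls    # busybox must still be listed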

TestScheduledStopUnix (99.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813205341-393438 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0813 20:54:22.129232  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 20:54:34.109303  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813205341-393438 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m0.756894111s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813205341-393438 -n scheduled-stop-20210813205341-393438
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813205341-393438 -n scheduled-stop-20210813205341-393438
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813205341-393438
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813205341-393438
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813205341-393438: exit status 7 (71.355838ms)

-- stdout --
	scheduled-stop-20210813205341-393438
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813205341-393438 -n scheduled-stop-20210813205341-393438
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813205341-393438 -n scheduled-stop-20210813205341-393438: exit status 7 (64.522554ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210813205341-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813205341-393438
--- PASS: TestScheduledStopUnix (99.89s)
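
Condensed, the scheduled-stop lifecycle exercised above: schedule a stop, confirm the countdown is registered, replace or cancel it, then let a short schedule fire so status drops to exit code 7. A sketch (the sleep is ours, giving the 5s schedule time to fire):

    out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813205341-393438
    # A new --schedule replaces the pending stop; --cancel-scheduled aborts it.
    out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-20210813205341-393438 --schedule 5s
    sleep 10; out/minikube-linux-amd64 status -p scheduled-stop-20210813205341-393438    # exit 7: stopped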

TestRunningBinaryUpgrade (244.83s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.074752799.exe start -p running-upgrade-20210813205520-393438 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0813 20:56:29.729656  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.16.0.074752799.exe start -p running-upgrade-20210813205520-393438 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m19.109434171s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210813205520-393438 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210813205520-393438 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m44.033988548s)
helpers_test.go:176: Cleaning up "running-upgrade-20210813205520-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813205520-393438

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210813205520-393438: (1.213755761s)
--- PASS: TestRunningBinaryUpgrade (244.83s)
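
The upgrade path under test: the previous release's binary creates and leaves a cluster running, then the freshly built binary takes over the same profile in place. A sketch (the v1.16.0 binary is the temp file from the log):

    /tmp/minikube-v1.16.0.074752799.exe start -p running-upgrade-20210813205520-393438 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-20210813205520-393438 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd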

TestKubernetesUpgrade (226.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.85457518s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813205735-393438
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813205735-393438: (2.103220463s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813205735-393438 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813205735-393438 status --format={{.Host}}: exit status 7 (85.227971ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.215632433s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813205735-393438 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (146.261465ms)

-- stdout --
	* [kubernetes-upgrade-20210813205735-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813205735-393438
	    minikube start -p kubernetes-upgrade-20210813205735-393438 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813205735-3934382 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813205735-393438 --kubernetes-version=v1.22.0-rc.0
	    

** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (41.642709698s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813205735-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813205735-393438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813205735-393438: (1.41384131s)
--- PASS: TestKubernetesUpgrade (226.55s)
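
The version policy this test pins down: upgrades across a stop succeed, but a downgrade request is refused up front with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) before the cluster is touched. A sketch of the refused step:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813205735-393438 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2 --container-runtime=containerd
    echo $?    # expect 106; recovery is 'minikube delete' plus a fresh start, as suggested above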

TestPause/serial/Start (153.2s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813205520-393438 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813205520-393438 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m33.199981293s)
--- PASS: TestPause/serial/Start (153.20s)

TestPause/serial/SecondStartNoReconfiguration (34.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813205520-393438 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813205520-393438 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (34.636983538s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210813205520-393438
version_upgrade_test.go:208: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210813205520-393438: (1.506607942s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

TestPause/serial/Unpause (1.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813205520-393438 --alsologtostderr -v=5
pause_test.go:118: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-20210813205520-393438 --alsologtostderr -v=5: (1.513422664s)
--- PASS: TestPause/serial/Unpause (1.51s)

                                                
                                    
TestNetworkPlugins/group/false (0.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813205926-393438 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813205926-393438 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (307.4715ms)

                                                
                                                
-- stdout --
	* [false-20210813205926-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 20:59:26.162136  430875 out.go:298] Setting OutFile to fd 1 ...
	I0813 20:59:26.162236  430875 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:26.162247  430875 out.go:311] Setting ErrFile to fd 2...
	I0813 20:59:26.162252  430875 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 20:59:26.162399  430875 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/bin
	I0813 20:59:26.162763  430875 out.go:305] Setting JSON to false
	I0813 20:59:26.211188  430875 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":6129,"bootTime":1628882238,"procs":199,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 20:59:26.211313  430875 start.go:121] virtualization: kvm guest
	I0813 20:59:26.343539  430875 out.go:177] * [false-20210813205926-393438] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 20:59:26.343710  430875 notify.go:169] Checking for updates...
	I0813 20:59:26.344836  430875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/kubeconfig
	I0813 20:59:26.354169  430875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 20:59:26.355529  430875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube
	I0813 20:59:26.356997  430875 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 20:59:26.357601  430875 config.go:177] Loaded profile config "force-systemd-env-20210813205836-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:59:26.357723  430875 config.go:177] Loaded profile config "kubernetes-upgrade-20210813205735-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0813 20:59:26.357848  430875 config.go:177] Loaded profile config "pause-20210813205520-393438": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0813 20:59:26.357895  430875 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 20:59:26.393974  430875 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 20:59:26.394026  430875 start.go:278] selected driver: kvm2
	I0813 20:59:26.394034  430875 start.go:751] validating driver "kvm2" against <nil>
	I0813 20:59:26.394056  430875 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 20:59:26.396805  430875 out.go:177] 
	W0813 20:59:26.396917  430875 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0813 20:59:26.398695  430875 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210813205926-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813205926-393438
--- PASS: TestNetworkPlugins/group/false (0.60s)
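
Note: the non-zero exit above is the expected result for this test — minikube rejects --cni=false whenever the container runtime is containerd, because containerd ships no built-in pod network (the MK_USAGE line in the stderr dump states this directly). As a minimal sketch, an invocation that satisfies the check names a concrete CNI instead; the profile name here is hypothetical, and "bridge" is one of minikube's documented --cni values:

	out/minikube-linux-amd64 start -p cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=containerd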

                                                
                                    
TestPause/serial/DeletePaused (0.80s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813205520-393438 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.80s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (171.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813205952-393438 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813205952-393438 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m51.679789298s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (171.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (151.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813210044-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813210044-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (2m31.776999282s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (151.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (125.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813210115-393438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813210115-393438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (2m5.317663758s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (125.32s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (107.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210121-393438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
E0813 21:01:29.729449  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 21:02:25.176738  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210121-393438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m47.133801421s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (107.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ce7a5fd9-fc79-11eb-9c66-525400553b5e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ce7a5fd9-fc79-11eb-9c66-525400553b5e] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.029392506s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813205952-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813205952-393438 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813205952-393438 --alsologtostderr -v=3: (1m32.490262382s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210121-393438 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [407ae8f8-713a-4db0-b5b2-627a4cf01b34] Pending
helpers_test.go:343: "busybox" [407ae8f8-713a-4db0-b5b2-627a4cf01b34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [407ae8f8-713a-4db0-b5b2-627a4cf01b34] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.025452808s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210121-393438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [490017d1-09e0-4601-ad19-be51bf5cb881] Pending

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [490017d1-09e0-4601-ad19-be51bf5cb881] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [490017d1-09e0-4601-ad19-be51bf5cb881] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.024009403s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.57s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813210121-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813210121-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05770942s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210121-393438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (92.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813210121-393438 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813210121-393438 --alsologtostderr -v=3: (1m32.476691276s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (92.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813210115-393438 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [aac80f07-42e3-4bdd-832b-c26fe68ba7a4] Pending

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [aac80f07-42e3-4bdd-832b-c26fe68ba7a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [aac80f07-42e3-4bdd-832b-c26fe68ba7a4] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.027118156s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813210115-393438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813210044-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813210044-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103909799s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (93.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813210044-393438 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210813210044-393438 --alsologtostderr -v=3: (1m33.478507161s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (93.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813210115-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813210115-393438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813210115-393438 --alsologtostderr -v=3
E0813 21:04:22.129275  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813210115-393438 --alsologtostderr -v=3: (1m32.479566232s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438: exit status 7 (68.800627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813205952-393438 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
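
Note: exit status 7 from "minikube status" is expected at this point — the stdout dump shows the host is Stopped, and the test explicitly treats the non-zero status as tolerable ("may be ok") before enabling the dashboard addon on the stopped cluster. A minimal sketch of the same check in a shell, reusing the profile name from this test:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 || echo "host not running (status exited $?)"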

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (477.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813205952-393438 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813205952-393438 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (7m56.763088631s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813205952-393438 -n old-k8s-version-20210813205952-393438
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (477.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438: exit status 7 (79.956611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813210121-393438 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (428.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210121-393438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813210121-393438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (7m8.222989451s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813210121-393438 -n default-k8s-different-port-20210813210121-393438
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (428.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438: exit status 7 (76.759999ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210813210044-393438 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (400.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813210044-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813210044-393438 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (6m40.187761081s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813210044-393438 -n no-preload-20210813210044-393438
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (400.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438: exit status 7 (69.009563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813210115-393438 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (506.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813210115-393438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
E0813 21:06:29.730084  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 21:09:22.129189  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 21:11:14.109778  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory
E0813 21:11:29.729135  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/addons-20210813200824-393438/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813210115-393438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (8m26.280886655s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813210115-393438 -n embed-certs-20210813210115-393438
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (506.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-29b2r" [42ed3d11-7b24-4788-8823-852e5b2ca9ea] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020480612s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-29b2r" [42ed3d11-7b24-4788-8823-852e5b2ca9ea] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011508314s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813210044-393438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210813210044-393438 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
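
Note: this check works by shelling into the node and asking the CRI for its image list as JSON, then scanning the repo tags for anything outside the expected minikube image set (hence the "Found non-minikube image" lines). A sketch of reproducing the listing by hand — assuming jq is available on the host — with the same profile:

	out/minikube-linux-amd64 ssh -p no-preload-20210813210044-393438 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'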

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6tdsg" [6860364e-45f9-41da-a2c3-763cf331586e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6tdsg" [6860364e-45f9-41da-a2c3-763cf331586e] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.058289579s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (105.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m45.870359139s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (105.87s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6tdsg" [6860364e-45f9-41da-a2c3-763cf331586e] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011664796s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813210121-393438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813210121-393438 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-7pkrv" [0127869d-fc7b-11eb-a3a8-525400553b5e] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01604056s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-7pkrv" [0127869d-fc7b-11eb-a3a8-525400553b5e] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.594063269s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813205952-393438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context old-k8s-version-20210813205952-393438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (5.373410874s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (10.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813205952-393438 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (126.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813205925-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813205925-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (2m6.355922679s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.36s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (201.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd
E0813 21:13:09.460757  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.466121  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.476972  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.497275  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.537576  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.617928  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:09.778476  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:10.099120  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:10.739616  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:12.020794  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:14.581620  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:17.017922  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.392935  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.403213  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.423494  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.463622  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.543844  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:18.704501  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:19.025118  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:19.666165  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:19.702556  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:13:20.946601  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:23.507076  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
E0813 21:13:28.627343  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (3m21.61089572s)
--- PASS: TestNetworkPlugins/group/cilium/Start (201.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-vfmn7" [92209727-a9d1-4943-a8c8-f0d00da0b005] Running
E0813 21:13:29.943209  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.448699946s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-vfmn7" [92209727-a9d1-4943-a8c8-f0d00da0b005] Running
E0813 21:13:38.867847  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.629527635s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813210115-393438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.97s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813210115-393438 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813211202-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813211202-393438 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.28516359s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/newest-cni/serial/Stop (4.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813211202-393438 --alsologtostderr -v=3
E0813 21:13:50.423902  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813211202-393438 --alsologtostderr -v=3: (4.13761899s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438: exit status 7 (68.271867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813211202-393438 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/newest-cni/serial/SecondStart (114.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813211202-393438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m54.529108289s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813211202-393438 -n newest-cni-20210813211202-393438
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (114.88s)

TestNetworkPlugins/group/calico/Start (118.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd
E0813 21:14:22.129424  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
E0813 21:14:31.384895  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:14:40.309736  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m58.168634823s)
--- PASS: TestNetworkPlugins/group/calico/Start (118.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813205925-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (13.87s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813205925-393438 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-twmjp" [f8038064-0454-429d-991e-3e169e1bddfa] Pending
helpers_test.go:343: "netcat-66fbc655d5-twmjp" [f8038064-0454-429d-991e-3e169e1bddfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-twmjp" [f8038064-0454-429d-991e-3e169e1bddfa] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.035568591s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.87s)

TestNetworkPlugins/group/auto/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813205925-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813205925-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813205925-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/custom-weave/Start (88.53s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd: (1m28.53043921s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (88.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813211202-393438 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-55xx6" [1ecf6e32-aeca-4e5e-b99c-287a77e01d34] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.025547902s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-jjq8z" [0a90b1ec-cdb5-4a07-9f06-275ede2e4a13] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.024644908s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (10.64s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-gl46c" [31faefa5-158b-4560-8b95-198b83b14863] Pending

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-gl46c" [31faefa5-158b-4560-8b95-198b83b14863] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-gl46c" [31faefa5-158b-4560-8b95-198b83b14863] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.017582784s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.64s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.25s)

TestNetworkPlugins/group/cilium/NetCatPod (12.58s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-pxkpc" [69e30fdf-2675-4e8d-a48c-8d403c2c91d8] Pending

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-pxkpc" [69e30fdf-2675-4e8d-a48c-8d403c2c91d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-pxkpc" [69e30fdf-2675-4e8d-a48c-8d403c2c91d8] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.02568781s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.58s)

TestNetworkPlugins/group/calico/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.37s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/kindnet/Start (116.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m56.171710008s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (116.17s)

TestNetworkPlugins/group/cilium/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.35s)

TestNetworkPlugins/group/cilium/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.31s)

TestNetworkPlugins/group/cilium/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.26s)

TestNetworkPlugins/group/flannel/Start (117.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m57.144101388s)
--- PASS: TestNetworkPlugins/group/flannel/Start (117.14s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-weave/NetCatPod (10.62s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-hh95k" [2f26cebe-bf18-4be0-9745-d373da37af17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-hh95k" [2f26cebe-bf18-4be0-9745-d373da37af17] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.008573636s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.62s)

TestNetworkPlugins/group/enable-default-cni/Start (147.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m27.323826294s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (147.32s)

TestNetworkPlugins/group/bridge/Start (107.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0813 21:17:45.079519  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.084853  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.095166  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.115509  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.155821  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.236162  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.396648  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:45.717318  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:46.358296  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:50.176456  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:52.736630  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:17:57.857117  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:18:08.097308  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory
E0813 21:18:09.459916  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory
E0813 21:18:17.017088  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/no-preload-20210813210044-393438/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813205926-393438 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m47.11654778s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-jvsps" [756d0177-334f-4baa-a186-cd2b9f67bf55] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017677793s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:343: "kube-flannel-ds-amd64-nksg8" [350cbd46-b034-4d87-a04c-c8e69fddc132] Running
E0813 21:18:28.578449  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.024950605s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.59s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-l4hk7" [d3f9e1bb-8f3a-494c-bddb-91a9ba69eb41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-l4hk7" [d3f9e1bb-8f3a-494c-bddb-91a9ba69eb41] Running
E0813 21:18:37.328756  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/default-k8s-different-port-20210813210121-393438/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.01016534s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.59s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (11.7s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-cs6mw" [ea8e698c-d41d-4512-865e-6c8febfbd2c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-cs6mw" [ea8e698c-d41d-4512-865e-6c8febfbd2c6] Running

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010660953s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.70s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-9bst7" [b1f28414-3a3b-4e25-b7c8-42b98b7533d1] Pending
helpers_test.go:343: "netcat-66fbc655d5-9bst7" [b1f28414-3a3b-4e25-b7c8-42b98b7533d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 21:19:05.177696  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-9bst7" [b1f28414-3a3b-4e25-b7c8-42b98b7533d1] Running
E0813 21:19:09.539014  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/old-k8s-version-20210813205952-393438/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008721001s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.49s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813205926-393438 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813205926-393438 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-snz5v" [d970bbe2-f92b-4517-a856-bfa20a56e24a] Pending
helpers_test.go:343: "netcat-66fbc655d5-snz5v" [d970bbe2-f92b-4517-a856-bfa20a56e24a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-snz5v" [d970bbe2-f92b-4517-a856-bfa20a56e24a] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.012929861s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813205926-393438 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813205926-393438 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0813 21:19:22.129148  393438 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-389865-098adff14f97e55ded5626b0a90c858c09622337/.minikube/profiles/functional-20210813202056-393438/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

Test skip (28/269)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
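The three TunnelCmd DNS skips above all trip the same OS guard at functional_test_tunnel_test.go:96. A minimal sketch of that kind of guard follows; the helper name is a hypothetical illustration, but runtime.GOOS is the standard way a Go test detects the host OS, and on this linux/KVM run the guard would fire.

// sketch (would live in a *_test.go file)
package functional

import (
	"runtime"
	"testing"
)

// skipUnlessDarwin is a hypothetical helper showing the darwin-only guard
// behind the skips logged above.
func skipUnlessDarwin(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" {
		t.Skip("DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test")
	}
}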

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: only tests the none driver
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env; currently testing the containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:286: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.3s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813210121-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813210121-393438
--- SKIP: TestStartStop/group/disable-driver-mounts (0.30s)
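Note that even this skipped group test pays its 0.30s in cleanup: helpers_test.go deletes the profile the group pre-created. Below is a rough Go sketch of that cleanup step, assuming nothing beyond the `minikube delete -p <profile>` invocation visible in the log; the function name and error handling are illustrative, not minikube's actual helper.

// sketch
package helpers

import (
	"fmt"
	"os/exec"
)

// cleanupProfile shells out to the minikube binary under test to delete a
// named profile, mirroring the `delete -p` invocation logged above.
func cleanupProfile(binary, profile string) error {
	out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput()
	if err != nil {
		return fmt.Errorf("deleting profile %s: %v: %s", profile, err, out)
	}
	return nil
}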

TestNetworkPlugins/group/kubenet (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: skipping the test because the containerd container runtime requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813205925-393438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813205925-393438
--- SKIP: TestNetworkPlugins/group/kubenet (0.30s)